A downloadable tool for Windows

Requirements:

This project requires an Nvidia card that can run CUDA.

With a card with 4 GB of VRAM, it should generate 256x512 images.


🎉 [V 0.5] Advertising [V 0.5]: 🎉

If you are enjoying my GUI and want more updates for it, check out my Patreon:

https://www.patreon.com/DAINAPP

In the Patreon version you can:

  • Run 512x512 with 4 GB VRAM
  • Use the upscaler
  • Use img2img
  • Use inpainting
  • Load other models
  • A bunch more options


What is this?

This is an interface to run the Stable Diffusion model.

In short: you write a text prompt and the model returns an image for each prompt.

You can read more about it here:

https://stability.ai/blog/stable-diffusion-public-release


Want some help with the prompts?

Check this site: https://lexica.art/


Running it:

Important: You should try to generate images at 512x512 for best results

A .exe to run Stable Diffusion. Still very alpha, so expect bugs.

Just open Stable Diffusion GRisk GUI.exe to start using it.

Resolution needs to be a multiple of 64 (64, 128, 192, 256, etc.)
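If you are scripting sizes rather than typing them into the GUI, snapping a requested dimension to the nearest valid multiple of 64 is a one-liner. A minimal sketch (`snap_to_64` is a hypothetical helper, not part of this tool):

```python
def snap_to_64(size: int, minimum: int = 64) -> int:
    """Round a requested image dimension to the nearest multiple of 64,
    never going below the smallest valid size."""
    snapped = round(size / 64) * 64
    return max(snapped, minimum)
```

For example, a requested 500 becomes 512, and anything at 32 or below snaps up to 64.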



Read This:

Summary of the CreativeML OpenRAIL License:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content

2. We claim no rights on the outputs you generate, you are free to use them and are accountable for their use which should not go against the provisions set in the license

3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)

Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license


Important Stuff:

  • It seems that some GTX 1660 cards have problems running models at half precision (the only option in this GUI for now).
  • Samples currently don't work; the GUI always generates one image per prompt. You can repeat the same prompt on several lines for a similar effect.
  • The AI usually gives good results at 512x512; other resolutions may affect quality.
  • More steps = better quality. More steps don't use more memory, just more time.
  • 150 steps or more is a good start.
  • This error appears on .exe startup. It always appears in 0.1, but the app should still work:

torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001C305192700>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001C3051A7A60>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")


Updated 13 days ago
Status: Released
Category: Tool
Platforms: Windows
Rating: 5.0 out of 5 stars (51)
Author: GRisk
Tags: ai, image-generation, model, python, pytorch, render, st, stable-diffusion
Average session: About an hour
Languages: English
Inputs: Keyboard, Mouse

Download

Stable Diffusion GRisk GUI.rar (3 GB)

Comments


Viewing most recent comments 1 to 40 of 198

Will it be possible to use the DALL-E 2 base in future versions? Thx.

I'm having this error when loading a dreambooth model:

Loading model (May take a little time.)

{'feature_extractor'} was not found in config. Values will be initialized to default values.

Traceback (most recent call last):

  File "start.py", line 320, in OnRender

  File "start.py", line 284, in LoadModel

  File "diffusers\pipeline_utils.py", line 247, in from_pretrained

TypeError: __init__() missing 1 required positional argument: 'feature_extractor'


Hello, I got this error when rendering a 512x512 image. I'm using an Nvidia 3060 Ti and 16GB of RAM, any solutions?

RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 8.00 GiB total capacity; 2.56 GiB already allocated; 2.69 GiB free; 2.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I don't understand, 2.69 GiB free is not enough to allocate 512 MiB? xD

Thanks in advance
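One hedged thing to try: the error text itself suggests `max_split_size_mb`, which caps how large the PyTorch caching allocator's blocks can get and can work around fragmentation (free VRAM exists, but no contiguous block fits the request). Whether the build bundled in this .exe honors it is an assumption, but setting the environment variable before launch costs nothing:

```python
import os

# Must be set before PyTorch initializes CUDA (i.e. before the app starts);
# caps allocator block splits at 128 MB to reduce VRAM fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

From a Windows terminal you'd instead run `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` and then launch the .exe from that same terminal so it inherits the variable.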

I got the patreon version, it works great! having tons of fun. Thanks for putting it together!

On my 1660 Ti it just outputs black, and it's not overclocked.


The free version has a bug that breaks the AI when using 16xx Nvidia cards, it's fixed on the Patreon, annoyingly.

Definitely sets off my pet peeve about devs withholding important bugfixes behind a paywall, though GRisk plans to eventually update the free one, whenever that is.


Please look into adding Dreambooth please!


Is there a Github for this?  


when is the 0.5 version of the gui gonna come out on here for free?


If you want it so bad, pay the guy a bit. Coding isn't easy, takes time and work. $10 to support good coders and programs is worth it.


That's not an acceptable response. Standard procedure in these situations is that the latest version spends some time on Patreon, then is released publicly. They were only asking a reasonable question, you dingus.


"That's not an acceptable response" says the person name calling in a response. Paying coders is a proper response if you want something early. The coder is under no obligation to release it publicly for free. So, you can either wait patiently, or pay the coder for their efforts. I respect the coder for even releasing it for free at all. When the coder is ready to put it up for free, then they will.  The coder needs to make a living, so your entitlement doesn't matter. 

Have a great day.


Okay, when they asked? They weren’t bitching and moaning. They were simply asking a factual question. You need to get over yourself, promptly…. 

You simply have a twisted worldview and think that we’re all entitled little brats or something. They were just asking when it’s going to be available. Again, stop being a dick. 


Says the person again calling names, acting aggressive, and telling a person with an opinion what they and another person who you don't know were thinking.


You must want drama and argument over the Internet.


Feel free to reply with more signaling and reactionism. I'll let you win. 


End of line. 


The only one being a dick is you, bro, you're a sentient human, maybe grow some self-awareness. 


You're acting extremely entitled, assuming that a creator has to deliver you a free product under a stringent time constraint??? 

Since fucking when?

What IS standard procedure, is you paying someone for their work or shutting the fuck up and being happy they're giving you anything at all for free.


Nothing better than an entitled man-child with a chip on his shoulder letting his entitlement run wild! 


You are telling me that to get a bug fix that affects 3% of all GPU owners (according to Steam, not the newest info; I'm talking about GTX 16xx owners) I have to pay $10. Like, are you crazy? Plus he's not even a developer of Stable Diffusion, he just made a GUI for it; $10 for just a GUI app, just listen to it, $10 for a GUI. And I wouldn't write this reply if I could download an original version, but I can't because "you don't have visual c++, idc that you downloaded it 5 times, lol". So a better approach would be: VIP with all those features it has, and free with only bugfixes.


well, I see only two options for me: pirate it, or fix it for myself somehow


this is exactly why i want the patreon version

btw I only asked this because of the one bugfix that fixes black output images on GTX 16xx cards, which is only available in the latest version on Patreon and not on here.


its on kemono


i cant find it, send link to exact post on there


it only outputs a black picture for me, did I do something wrong?


That's commonly caused by 16xx Nvidia GPUs, as far as I know. No clue what else causes it.

If you've got a 1650, 1660, whatever GPU, I'm pretty sure you're out of luck, the fix isn't enabled in the free version.


yea i have a 1650 :(


sad, try a Google Colab, it doesn't use your GPU and it's fast


i know, i did and it worked. thanks anyways though :)


If you want to sell this, then why not simply sell it rather than go through Patreon?


There are still too many problems and missing options for it to be a full product. Possibly in the future.

Hey there! Will you ever make an "image input" option?


Hi there, on the Patreon version it's already possible. It may take some time to become public.


When will it be available for Linux? If it ever will be?


It's a little hard since it requires some code changes and I need a machine running Linux, but not impossible I think.


Thanks! I hope you can make it possible soon!


There is no information about the tiers and access on patreon! Which tier do you have to have to access the download? Will there be any free updates here? What features does the patreon one even have? ???


Umm... the download is on this page & requires no payment.

Patreon is to support the dev & only promises early access to any potential future updates.


Wrong. Features previously available have already been blocked behind Patreon. 


As wintergrey says, there is a free version on this page but that appears to be frozen at v0.1.  
I believe you need to pay £8 per month to get the current version on Patreon (0.5).  As you say, it isn't clear but I paid £4 then £8 - at £8 I could see the download.


Patreon only charges you at the end of the month, so you are free to test the tiers if you like without paying anything. In the future they may become free updates, but not for now.
There are already some updates in the Patreon version, like better memory usage and img2img.


Sorry but no, Patreon charges upfront for any tier so there is no free testing of tiers and they charge the first of the month no matter when you joined the month before. This is the week before the next month, paying $10 now in September and then having to pay $10 for October a week later is too soon for many people. On October 1st you will probably see an influx of patrons.

As someone who runs a little Patreon, nah.
Creators on Patreon get to choose if it's set to pay upfront, or just at the first of the month, and GRisk's Patreon is set to the first of the month. I still haven't been charged a cent since I subbed, but I have full access to the download just fine.

Someone could legitimately sub to GRisk, unsub at the end of the month and resub at the beginning of the next month and get full access for free forever, which is probably why most creators switch to the "pay upfront" mode.

Really, how did you sub to GRisk's Patreon without paying?  I was charged as soon as I subscribed. Very interesting


is there a chance you will make that for linux with a .deb ?


I've never experimented with Linux; I'd first need a machine running Linux as well.


AMD GPUs tend to have tons of memory, but CUDA-only applications can't take advantage of it.


It's possible to run PyTorch scripts on AMD; my rife-app runs on AMD as well. But it takes some extra work.


What exactly do I get with Patreon? Do I need to stay subbed to get new versions? How about using the software if I'm no longer a patron? Is there a list of features, i.e. how the "paid" version differs from this free one?


You can get new downloads as long as you stay subbed; you can keep using the application forever if you cancel the sub. The Patreon version already has a lot of new stuff and uses less memory to run the model.


hello, how do I add more models to pull from?  I have the 0.5 version btw


There is a tutorial on Patreon. If you have any questions you can send me a PM via Patreon.

My computer has 2 graphics cards (3070, 3080). How can I make the software take full advantage of all graphics cards?

Good question.  Never even thought of this.
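The GUI has no multi-GPU option, so the realistic approach is pinning each process to one card via the standard `CUDA_VISIBLE_DEVICES` environment variable and running two instances side by side. A sketch, assuming the bundled PyTorch respects the variable (standard CUDA behavior, but untested against this .exe):

```python
import os

# Pin this process to GPU 0; a second instance would set "1" instead.
# Must be set before CUDA initializes, i.e. before the app/torch loads.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

This splits work across cards per process (e.g. different prompt lists), not a single render across both GPUs.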

is there a way to use less memory? slow it down or something. I can't even render a single 64x64 image without running out of memory

what GPU do you have?

really nothing good. NVidia 940M

I thought that at least something small could be rendered with that but I guess I underestimated the program

Well if I try to generate 64x64 images, it uses all of my 3GB vRAM in my laptop. I don't know how much vRAM you have but you probably need at least more than 2GB to make it even function. But you might also need some newer features added to newer GPUs.

I can generate 320x320 images with my GTX 1050 mobile and 3 GB of vRAM, but it is still soooo slooooooooooow to generate something like a minute or so. I'm afraid you are out of luck with this kind of technology, I don't know about the Patreon version though. This software is really meant more for modern desktop GPUs where you can easily generate big images in the matter of seconds. But you can generate some images using StabilityAIs official demo here: https://huggingface.co/spaces/stabilityai/stable-diffusion

you will wait for the results a bit, but it will work. (most of the time anyway)

Btw. 64x64 will generate just some random streaks of color, at 256*256+ you can get low quality but somewhat decent images. 


The Patreon version uses less memory. It should run 512x512 with 4 GB VRAM.

I only get a black image. Tried with different prompts. Do I need to download a library to use it? If so, how do I do that, step by step?

You must have a card that doesn't like half-precision models. 0.51 on Patreon should fix this.


It generates a black image for all prompts and all settings, using a GTX 1650, trying to generate "Minecraft Steve" with 150 iterations at 256x256.

Same for me. Trying to figure out what I'm doing wrong...

This software has some problems utilizing half precision on GTX 16xx GPUs. It should work with full precision, but the free version of this GUI doesn't support it. An alternative for you could be this:

NMKD Stable Diffusion GUI - AI Image Generator

I don't own a 16xx GPU, so I can't really confirm, but it should work with 16xx GPUs if you set it up correctly.
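For background: the black images on GTX 16xx cards are reportedly NaNs produced by half-precision (fp16) inference, and the usual workaround in other front ends is to fall back to full precision (fp32) on those cards. A sketch of such a guard; `wants_full_precision` is a hypothetical helper, since the actual fix has to happen inside whichever GUI loads the model:

```python
def wants_full_precision(gpu_name: str) -> bool:
    """GTX 16xx cards are known to produce black (NaN) images at half
    precision, so a loader should pick fp32 for them instead of fp16."""
    name = gpu_name.upper()
    return any(marker in name for marker in ("GTX 16", "1650", "1660"))
```

A diffusers-style loader would then pass `torch_dtype=torch.float32` instead of `torch.float16` when this returns True, at the cost of roughly double the VRAM.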

Hi! I got the app to run successfully, but I'm unable to get it to utilize my better graphics card, and it keeps running off of my integrated graphics. I've tried all the layman's solutions, using the Nvidia control panel and system settings to specify which card that stable diffusion should use, but it keeps trying to allocate space in my integrated graphics card. Any help is appreciated.

Hi there. You will be able to select the graphics card in 0.51, but only on Patreon for now.

My Nvidia MX330 2GB gives this error: RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 1.67 GiB already allocated; 0 bytes free; 1.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

What solutions ?


Edit: nvm, I know what's wrong now: the software will not allow rendering images at a resolution bigger than 512x512, and as soon as you try something larger it gives the error. So in conclusion, this is just a dummy testing tool. Not worth the time downloading.

Getting the same error.

 My Nvidia RTX 2060 GDDR6 6GB


Bummer, Nvidia only.


GRisk, works great for me, looking forward to any updates you release. Thanks for creating this.


Very nice, but I wouldn't recommend it: if you make an image at less than 512x512 resolution, it will look ugly and not very detailed.

It'd be nice not to get the "Forbidden Error" when trying to download this.... riiiiight?

Worked really well up until I got the following error. Now it occurs every time, regardless of my chosen settings. I know it's not my RAM or VRAM, as I'm running 32gb of RAM and my 3070 has 8gb of VRAM. Any known workarounds/fixes?

Error message:

Traceback (most recent call last):

  File "start.py", line 363, in OnRender

  File "torch\autograd\grad_mode.py", line 27, in decorate_context

    return func(*args, **kwargs)

  File "diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 141, in __call__

  File "torch\nn\modules\module.py", line 1130, in _call_impl

    return forward_call(*input, **kwargs)

  File "diffusers\models\unet_2d_condition.py", line 150, in forward

  File "torch\nn\modules\module.py", line 1130, in _call_impl

    return forward_call(*input, **kwargs)

  File "diffusers\models\unet_blocks.py", line 505, in forward

  File "torch\nn\modules\module.py", line 1130, in _call_impl

    return forward_call(*input, **kwargs)

  File "diffusers\models\attention.py", line 168, in forward

  File "torch\nn\modules\module.py", line 1130, in _call_impl

    return forward_call(*input, **kwargs)

  File "diffusers\models\attention.py", line 196, in forward

  File "torch\nn\modules\module.py", line 1130, in _call_impl

    return forward_call(*input, **kwargs)

  File "diffusers\models\attention.py", line 254, in forward

RuntimeError: CUDA out of memory. Tried to allocate 2.25 GiB (GPU 0; 8.00 GiB total capacity; 4.40 GiB already allocated; 0 bytes free; 6.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Oddly, as soon as I posted here it began working again, no settings changed. Neat feature!

it seems like this thing is very unstable for the most part.

i have some questions:

1. is this safe for my device? Some comments say their GPU died after using this because their GPUs were overclocked. I have an RTX 3060 12GB OC Edition by Zotac; it's OC from the box, so I'm not sure if it's the same as a manual OC.

2. is this uncensored?

3. how long does it take to generate an image?

2. It's uncensored

3. I have an RTX 3070 8GB and it takes 15s to create a 50-iteration image.

out of curiosity, how long do you think this would take on Intel's integrated graphics?


Can't give an exact guess, but it'll take much longer. Just download the package and try it. But you need at least 8GB VRAM to get 512x512 px images; anything smaller is pretty small and useless.


ty so much for this, I have been looking everywhere for an easy-to-install SD GUI that actually works with GTX 16xx cards


My pleasure. Happy prompting!

Hi there, can you post this again but without changing the font size? It really pollutes the comments.

Question, can this AI make 1080p or 4k images?

(Going to be getting a RTX 3090 FTW in the future)


I'm getting errors every time I open the program:

torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x00000292B4667D30>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x00000292B468F0D0>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")

The program still works as expected

I recently downloaded Stable Diffusion for PC and got a black image. I was advised to replace my graphics card.

After replacing my NVIDIA GeForce 1650 card with a Gigabyte GeForce RTX 3060 Ti Eagle OC 8G, I tried to run Stable Diffusion on my PC but got the following error:

torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000019888E120D0>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000019888E12310>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")

Render Start

Traceback (most recent call last):

  File "start.py", line 316, in OnRender

  File "start.py", line 207, in SavePrefab

PermissionError: [Errno 13] Permission denied: 'config_user.json'

Can anyone provide any advice?

Many thanks in advance!

The overclocking could be the main reason why you're just getting black images.

When I set my GPU's clock back to stock, it worked normally.

I don't get black images anymore. All I get is the list of errors attached in my earlier message.

Any ideas how to solve this problem? Is there a user manual for Stable Diffusion?

Sounds like the tool is not allowed to create the config file. Did you put it into "Program Files (x86)"? Try moving it to a temporary folder on your hard drive and see if it still happens.
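That theory matches the traceback: Windows refuses writes inside `Program Files (x86)`, so `config_user.json` can't be created next to the .exe. The conventional application-side fix is to write per-user config under `%APPDATA%` instead; a sketch (hypothetical helper, not what this GUI currently does):

```python
import os

def user_config_path(app_name: str = "GRiskGUI",
                     filename: str = "config_user.json") -> str:
    """Return a per-user, writable location for the config file."""
    # %APPDATA% exists on Windows; fall back to the home directory elsewhere.
    base = os.environ.get("APPDATA", os.path.expanduser("~"))
    folder = os.path.join(base, app_name)
    os.makedirs(folder, exist_ok=True)  # create the folder on first run
    return os.path.join(folder, filename)
```

Until something like this ships, moving the whole app out of Program Files (as suggested above) sidesteps the permission check.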

The worst SD GUI I have tested so far! A lot of errors and bad performance! Absolutely not optimized at all!

Lacking a lot of features like RealESRGAN, GFPGAN, selectable gpu device, txt2img & img2img samplers, on the fly model switching, image viewer, masking, custom concepts, img2img, img2prompt, and so on... Absolute waste of time and disk space!!!


which one do you recommend?


Yeah, 0.1 is pretty awful, but the paid version is currently on 0.41 and it's practically magical in terms of performance and speed, and includes a good chunk of the stuff you've listed.
Definitely has the lowest VRAM usage out of all the AI GUIs too I think, especially since they added the low VRAM usage toggle. I'm pretty sure some people are rendering with 3GB VRAM and making pretty solid images at a decent speed, too.

I'd really like to know when/if GRisk ever plans to update the free version, or if it's just doomed to stay as some janky unfinished load of WIP to serve as a "demo" for the paid version.

where can I pay for it, does this work on Mac as well?

(How much is it btw?)

No idea about Mac, but https://www.patreon.com/DAINAPP/posts


torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001C0E1363820>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")

torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001C0E13638B0>.

  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")

//how to fix this//

Did you solve this? I have the same problem

I'm pretty sure it's only a warning that you can ignore

https://discuss.pytorch.org/t/issue-with-torch-git-source-in-pyinstaller/135446/...
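Right; both startup messages are `UserWarning`s from the bundled PyTorch/PyInstaller combination, not errors. You can't silence them inside the packaged .exe, but if you were running the underlying scripts yourself, standard `warnings` filters would do it (a sketch under that assumption):

```python
import warnings

# Suppress the two known-benign startup warnings from the bundled torch.
warnings.filterwarnings(
    "ignore", category=UserWarning,
    message=r"Unable to retrieve source for @torch\.jit\._overload.*",
)
warnings.filterwarnings(
    "ignore", category=UserWarning,
    message=r"Failed to load image Python extension.*",
)
```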

Yes, I think it's because you can't use images as a source unless you pay $8 a month!

I love it !!!!


Can anyone get the inpainting working? I tried several ways and it doesn't seem to work. Maybe it's only partially implemented, as I don't see a way to upload the original file and the mask.
