[Feature request] Multi-GPU data parallelism #311
I'll accept PRs for this as long as they don't make too many changes all over the place.
Actually @AUTOMATIC1111, I believe the changes are limited to 5 files, which are wrapped in a multiprocessing wrapper in torch, like the scriptlet below (from server.py at the link posted by @aeon3):

```python
import gradio as gr

if __name__ == "__main__":
```
I'm unfortunately not a specialist in Python, else I'd be glad to help, as the requested feature is something I'm also interested in. Thank you and all the credited people for all the amazing work.
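The wrapper idea mentioned above could be sketched roughly as below: one worker process per GPU, pulling prompts from a shared queue. This is a minimal illustration using the stdlib `multiprocessing` module (torch.multiprocessing exposes a near-identical `Process`/`Queue` API); all names here are hypothetical, not taken from the actual server.py.

```python
import multiprocessing as mp

def _worker(device_id, task_q, result_q):
    # In the real app this would load the model onto f"cuda:{device_id}"
    # and generate images; here each task is just tagged with its device.
    for prompt in iter(task_q.get, None):  # None = shutdown signal
        result_q.put((device_id, f"image for {prompt!r}"))

def run_parallel(num_devices, prompts):
    # NOTE: real CUDA workloads must use the "spawn" start method
    # (torch.multiprocessing handles that); "fork" keeps this demo simple.
    ctx = mp.get_context("fork")
    task_q, result_q = ctx.Queue(), ctx.Queue()
    procs = [ctx.Process(target=_worker, args=(d, task_q, result_q))
             for d in range(num_devices)]
    for p in procs:
        p.start()
    for prompt in prompts:
        task_q.put(prompt)
    for _ in procs:  # one shutdown signal per worker
        task_q.put(None)
    results = [result_q.get() for _ in prompts]
    for p in procs:
        p.join()
    return results
```

Each worker stops on its first `None`, so one sentinel per process cleanly shuts the pool down regardless of how the prompts were distributed.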
I'll be willing to test the code if someone writes it 😅 I have a single system with 8 assorted GPUs (varying sizes, architectures, etc.) installed.
A little more insight into this matter: on the original model, there are several options that enable multiprocessing. Inserted in sequence:

```
--strategy=gpu --auto_select_gpus=true --devices=<num_gpu> --num_nodes=<num_gpu>
```

I don't know whether these options are being passed through to the backend stable-diffusion engine, but I believe that if there's a way to do that, we'll have the functionality working.
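Those flags look like PyTorch Lightning Trainer options. As a hedged sketch of how a launcher might capture them before handing off to the backend, here is a hypothetical argparse fragment (`parse_launch_flags` is illustrative, not part of this repo):

```python
import argparse

def parse_launch_flags(argv):
    # Hypothetical launcher-side parser mirroring the flags quoted above.
    p = argparse.ArgumentParser()
    p.add_argument("--strategy", default="gpu")
    # Accept "true"/"false" strings as booleans, like the CLI usage above.
    p.add_argument("--auto_select_gpus",
                   type=lambda s: s.lower() == "true", default=False)
    p.add_argument("--devices", type=int, default=1)
    p.add_argument("--num_nodes", type=int, default=1)
    return p.parse_args(argv)
```

The parsed values could then be forwarded to whatever trainer or inference backend actually consumes them.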
@AUTOMATIC1111, I imagine you're really busy with all the requests and bugs, but if you have 5 minutes, have a look at this file in NickLucche's project: https://github.com/NickLucche/stable-diffusion-nvidia-docker/blob/master/parallel.py He apparently wrote an external wrapper that launches the application, queries whether there are multiple GPUs, and, if there are, brings data parallelism into play.
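The detect-then-dispatch behavior described above could be sketched like this (a hypothetical class, not the actual parallel.py; in the real project the GPU count would come from `torch.cuda.device_count()`):

```python
import itertools

class DataParallelDispatcher:
    # Hedged sketch of the external-wrapper idea: check the GPU count,
    # fall back to single-device mode below 2 GPUs, and otherwise
    # round-robin generation requests across devices.
    def __init__(self, num_gpus):
        self.data_parallel = num_gpus >= 2
        self._devices = itertools.cycle(range(max(num_gpus, 1)))

    def assign(self, prompt):
        # Returns (device_index, prompt) for the worker that should run it.
        return next(self._devices), prompt
```

A round-robin policy is the simplest choice; a real wrapper might instead weight assignment by per-GPU VRAM, which matters on mixed-GPU systems like the 8-GPU box mentioned earlier in the thread.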
This would be a game changer!
Agreed on this being a game-changer. There are currently some issues that I found and Nick is looking into, but after everything's ironed out, yes -- it would be amazing if this repo used NickLucche's code as a downstream consumer or something. :) |
duplicate of #156 |
This is the most intuitive and complete webui fork. It would be amazing if this could be implemented here:
NickLucche/stable-diffusion-nvidia-docker#8
The potential to double image output even with the same VRAM is awesome.