ZLUDA
ZLUDA (CUDA wrapper) for AMD GPUs on Windows
ZLUDA does not fully support PyTorch in its official build, so ZLUDA support is tricky and unstable, and is limited at this time. Please don't create issues regarding ZLUDA on GitHub; instead, reach out via the ZLUDA thread in the help channel on Discord.
This guide assumes you have Git and Python installed, and are comfortable using the command prompt, navigating Windows Explorer, renaming files and folders, and working with zip files.
If you have an integrated AMD GPU (iGPU), you may need to disable it, or use the HIP_VISIBLE_DEVICES environment variable.
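A hedged sketch of using that variable, assuming the discrete GPU enumerates as HIP device 0 (the ordering varies by system; try 0 or 1):

```shell
:: Hypothetical example: expose only HIP device 0 to ZLUDA for this session.
:: Your discrete GPU's index may differ.
set HIP_VISIBLE_DEVICES=0
.\webui.bat --use-zluda
```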
Note: Almost everyone will have this already, since it ships with a lot of games, but there's no harm in installing it again.
Grab the latest version of Visual C++ Runtime from https://aka.ms/vs/17/release/vc_redist.x64.exe (this is a direct download link) and then run it.
If you get the options to Repair or Uninstall, then you already have it installed and can click Close. Otherwise, install it.
ZLUDA is now auto-installed, and automatically added to PATH, when starting webui.bat with --use-zluda.
Install HIP SDK 6.2 from https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html
As long as your regular AMD GPU driver is up to date, you don't need to install the PRO driver that the HIP SDK suggests.
Go to https://rocm.docs.amd.com/projects/install-on-windows/en/develop/reference/system-requirements.html and find your GPU model.
If your GPU model has a ✅ in both columns then skip to Install SD.Next.
If your GPU model has an ❌ in the HIP SDK column, or if your GPU isn't listed, follow the instructions below:
- Open Windows Explorer and paste C:\Program Files\AMD\ROCm\6.2\bin\rocblas into the location bar (assuming you've installed the HIP SDK in the default location and Windows is located on C:).
- Make a copy of the library folder, for backup purposes.
- Download one of the unofficial rocBLAS libraries below, and unzip it into the original library folder, overwriting any files there.
gfx1010: RX 5700, RX 5700 XT
gfx1012: RX 5500, RX 5500 XT
gfx1031: RX 6700, RX 6700 XT, RX 6750 XT
gfx1032: RX 6600, RX 6600 XT, RX 6650 XT
gfx1103: Radeon 780M
gfx803: RX 570, RX 580
More...
- Open the zip file.
- Drag and drop the library folder from the zip file into %HIP_PATH%bin\rocblas (the folder you opened in step 1).
- Reboot your PC.
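The drag-and-drop steps above can also be sketched from a Command Prompt; this assumes the default HIP SDK install path and uses a hypothetical path for your unzipped download:

```shell
:: Back up the original library folder, then overwrite it with the unofficial build.
cd "C:\Program Files\AMD\ROCm\6.2\bin\rocblas"
xcopy /e /i library library.bak
:: "path\to\unzipped\library" is a placeholder for wherever you unzipped the download.
xcopy /e /y "path\to\unzipped\library" library
```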
If your GPU model is not in the HIP SDK column and not in the list above, follow the instructions in the ROCm Support guide to build your own rocBLAS libraries.
(Note: Building your own libraries is not for the faint of heart.)
Using Windows Explorer, navigate to a place you'd like to install SD.Next. This should be a folder which your user account has read/write/execute access to. Installing SD.Next in a directory which requires admin permissions may cause it to not launch properly.
Note: Refrain from installing SD.Next into the Program Files, Users, or Windows folders (this includes the OneDrive folder and the Desktop), or into a folder that begins with a period (e.g. .sdnext).
The best place would be on an SSD, for faster model loading.
In the Location Bar, type cmd, then hit [Enter]. This will open a Command Prompt window at that location.
Copy and paste the following commands into the Command Prompt window, one at a time:
git clone https://github.com/vladmandic/sdnext
then
cd sdnext
then
.\webui.bat --use-zluda --debug --autolaunch
Note: ZLUDA functions best in Diffusers Backend, where certain Diffusers-only options are available.
After the UI starts, head on over to the System Tab (Standard UI) or the Settings Tab (Modern UI), then the Compute Settings category.
Set "Attention optimization method" to "Dynamic Attention BMM", then click Apply settings.
Now, try to generate something.
This should take a fair while to compile (10-15mins, or even longer; some reports state over an hour), but this compilation should only need to be done once.
Note: The text "Compilation is in progress. Please wait..." will repeatedly appear; just be patient. Eventually your image will start generating.
Subsequent generations will be significantly quicker.
If you have problems with ZLUDA after updating SD.Next, upgrading ZLUDA may help.
- Remove the .zluda folder.
- Launch the WebUI. The installer will download and install a newer ZLUDA.
※ You may have to wait for compilation again, as with the first generation.
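The two steps above, run from your SD.Next install folder, look roughly like this:

```shell
:: Delete the cached ZLUDA so the installer fetches a newer one on the next launch.
rmdir /s /q .zluda
.\webui.bat --use-zluda --debug --autolaunch
```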
MIOpen, the cuDNN equivalent for AMD GPUs, hasn't been released on Windows yet.
However, you can enable it with a custom build of MIOpen.
This section describes how to enable cuDNN support (backed by MIOpen).
- Switch to the dev branch.
- Install HIP SDK 6.2. If you already have an older HIP SDK, uninstall it before installing 6.2.
- Remove the .zluda folder if it exists.
※ If you have set the ZLUDA environment variable, download the latest nightly ZLUDA from here.
※ If you built ZLUDA yourself, pull the latest commits of ZLUDA and rebuild with --nightly.
- Download and install the HIP SDK extension from here.
(unzip and paste the folders over path/to/AMD/ROCm/6.2)
- Launch the WebUI with the environment variable ZLUDA_NIGHTLY=1.
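Setting that variable for a single Command Prompt session could look like this (a sketch; only ZLUDA_NIGHTLY=1 itself comes from the steps above):

```shell
:: Set ZLUDA_NIGHTLY for this session only, then launch the WebUI.
set ZLUDA_NIGHTLY=1
.\webui.bat --use-zluda --debug --autolaunch
```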
The first generation will take a long time because MIOpen has to find the optimal solution and cache it.
If you get driver crashes, restart the WebUI and try again.
- Go to Compute Settings.
- Change the precision type to FP16.
- Change the attention optimization method to Scaled-Dot-Product.
- Enable Flash Attention and turn Dynamic Attention off in the SDP options.
- Go to Backend Settings.
- Enable Deterministic mode.
hipBLASLt, the cuBLASLt equivalent for AMD GPUs, hasn't been released on Windows yet.
However, there are unofficial builds available.
This section describes how to enable cuBLASLt support (backed by hipBLASLt).
- Install HIP SDK 6.2. If you already have an older HIP SDK, uninstall it before installing 6.2.
- Remove the .zluda folder if it exists.
※ If you have set the ZLUDA environment variable, download the latest nightly ZLUDA from here.
※ If you built ZLUDA yourself, pull the latest commits of ZLUDA and rebuild with --nightly.
- Download and install an unofficial hipBLASLt build: gfx1100, gfx1101, gfx1102, gfx1103, or gfx1150.
- Launch the WebUI with the environment variable ZLUDA_NIGHTLY=1.
| | DirectML | ZLUDA |
|---|---|---|
| Speed | Slower | Faster |
| VRAM Usage | More | Less |
| VRAM GC | ❌ | ✅ |
| Training | * | ✅ |
| Flash Attention | ❌ | ❌ |
| FFT | ❓ | ✅ |
| FFTW | ❓ | ❌ |
| DNN | ❓ | |
| RTC | ❓ | |
| Source Code | Closed | Open |
| Python | <=3.12 | Same as CUDA |

❓: unknown
*: known to be possible, but uses too much VRAM to train stable diffusion models/LoRAs/etc.
| DTYPE | Supported |
|---|---|
| FP64 | ✅ |
| FP32 | ✅ |
| FP16 | ✅ |
| BF16 | ✅ |
| LONG | ✅ |
| INT8 | ✅ |
| UINT8 | ✅* |
| INT4 | ❓ |
| FP8 | |
| BF8 | |

*: Not tested.