- Create a RunPod Account.
- Create a RunPod Network Volume.
- Attach the Network Volume to a Secure Cloud GPU pod.
- Select a lightweight template such as RunPod Pytorch.
- Deploy the GPU Cloud pod.
- Once the pod is up, open a Terminal and install the required dependencies, either by using the installation script or manually.
You can run the automatic installation script below, which installs everything the manual steps cover; if you use it, you can skip the manual instructions entirely.
wget https://raw.githubusercontent.com/ashleykleynhans/runpod-worker-forge/main/scripts/install.sh
chmod +x install.sh
./install.sh
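- (Optional) Sanity-check that the script completed, assuming it creates the same layout as the manual steps below:
ls -d /workspace/stable-diffusion-webui-forge /workspace/venv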
You only need to complete the steps below if you did not run the automatic installation script above.
- Install the Stable Diffusion WebUI Forge:
# Clone the repo
cd /workspace
git clone --depth=1 https://github.com/lllyasviel/stable-diffusion-webui-forge.git
# Update system packages (including Python)
apt update
apt -y upgrade
# Ensure Python version is 3.10.12
python3 -V
# Create and activate venv
cd stable-diffusion-webui-forge
python3 -m venv /workspace/venv
source /workspace/venv/bin/activate
# Install Torch and xformers
pip3 install --no-cache-dir torch==2.1.2 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip3 install --no-cache-dir xformers==0.0.23.post1
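# (Optional check, not part of the upstream steps) Confirm this Torch build can see the GPU
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"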
# Install Stable Diffusion WebUI Forge
wget https://raw.githubusercontent.com/ashleykleynhans/runpod-worker-forge/main/install-forge.py
python3 -m install-forge --skip-torch-cuda-test
# Clone the ReActor Extension
cd /workspace/stable-diffusion-webui-forge
git clone https://github.com/Gourieff/sd-webui-reactor.git extensions/sd-webui-reactor
# Install dependencies for ReActor
cd /workspace/stable-diffusion-webui-forge/extensions/sd-webui-reactor
git checkout v0.6.1
pip3 install -r requirements.txt
pip3 install onnxruntime-gpu
# Install the model for ReActor
mkdir -p /workspace/stable-diffusion-webui-forge/models/insightface
cd /workspace/stable-diffusion-webui-forge/models/insightface
wget https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx
# Configure ReActor to use the GPU instead of CPU
echo "CUDA" > /workspace/stable-diffusion-webui-forge/extensions/sd-webui-reactor/last_device.txt
- Install the Serverless dependencies:
cd /workspace/stable-diffusion-webui-forge
pip3 install huggingface_hub runpod
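- (Optional) Confirm that both packages landed in the active venv:
pip3 show runpod huggingface_hub | grep -E "^(Name|Version)"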
- Download some models, for example SDXL and Deliberate v2:
cd /workspace/stable-diffusion-webui-forge/models/Stable-diffusion
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
wget -O deliberate_v2.safetensors https://huggingface.co/XpucT/Deliberate/resolve/main/Deliberate_v2.safetensors
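- (Optional) Confirm the checkpoints downloaded completely before moving on:
ls -lh /workspace/stable-diffusion-webui-forge/models/Stable-diffusion
# If any file looks truncated, re-run the matching wget with the -c flag to resume the download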
- Download VAEs for SD 1.5 and SDXL:
cd /workspace/stable-diffusion-webui-forge/models/VAE
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
wget https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl_vae.safetensors
- Download ControlNet models, for example canny for SD 1.5 as well as SDXL:
mkdir -p /workspace/stable-diffusion-webui-forge/models/ControlNet
cd /workspace/stable-diffusion-webui-forge/models/ControlNet
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth
wget https://huggingface.co/lllyasviel/sd_control_collection/resolve/main/diffusers_xl_canny_full.safetensors
- Download InstantID ControlNet models:
wget -O ip-adapter_instant_id_sdxl.bin "https://huggingface.co/InstantX/InstantID/resolve/main/ip-adapter.bin?download=true"
wget -O control_instant_id_sdxl.safetensors "https://huggingface.co/InstantX/InstantID/resolve/main/ControlNetModel/diffusion_pytorch_model.safetensors?download=true"
- Create logs directory:
mkdir -p /workspace/logs
- Install config files:
cd /workspace/stable-diffusion-webui-forge
rm webui-user.sh config.json ui-config.json
wget https://raw.githubusercontent.com/ashleykleynhans/runpod-worker-forge/main/webui-user.sh
wget https://raw.githubusercontent.com/ashleykleynhans/runpod-worker-forge/main/config.json
wget https://raw.githubusercontent.com/ashleykleynhans/runpod-worker-forge/main/ui-config.json
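- (Optional) If you edit the config files, you can catch JSON syntax errors before starting the Web UI (webui-user.sh is a shell script, so only the two JSON files apply):
python3 -m json.tool config.json > /dev/null && echo "config.json OK"
python3 -m json.tool ui-config.json > /dev/null && echo "ui-config.json OK"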
- Run the Web UI:
deactivate
export HF_HOME="/workspace"
cd /workspace/stable-diffusion-webui-forge
./webui.sh -f
- Wait for the Web UI to start up and download the models. You should see something like this when it is ready:
Model loaded in 17.3s (calculate hash: 7.0s, forge load real models: 9.0s, calculate empty prompt: 1.0s).
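- (Optional) From a second terminal you can confirm the Web UI API is responding before shutting down. This assumes the API is enabled in webui-user.sh; the port is also set there (the stock A1111/Forge default is 7860), so adjust it if yours differs:
curl -s http://127.0.0.1:7860/sdapi/v1/sd-models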
- Press Ctrl-C to exit, and then you can terminate the pod.