
UPSTREAM PR #1184: Feat: Select backend devices via arg #40

Open

loci-dev wants to merge 20 commits into main from loci/pr-1184-select-backend

Conversation


@loci-dev loci-dev commented Feb 2, 2026

Note

Source pull request: leejet/stable-diffusion.cpp#1184

The main goal of this PR is to improve the user experience in multi-GPU setups by allowing users to choose which model component gets sent to which device.

CLI changes:

  • Adds the --main-backend-device [device_name] argument to set the default backend
  • Removes the --clip-on-cpu, --vae-on-cpu, and --control-net-cpu arguments
  • Replaces them, respectively, with the new --clip_backend_device [device_name], --vae-backend-device [device_name], and --control-net-backend-device [device_name] arguments
  • Adds --diffusion_backend_device (controls the device used for the diffusion/flow models) and --tae-backend-device
  • Adds --upscaler-backend-device, --photomaker-backend-device, and --vision-backend-device
  • Adds the --list-devices argument to print the list of available ggml devices and exit
  • Adds the --rpc argument to connect to a compatible GGML RPC server (see the example invocation after this list)
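
A hypothetical invocation showing how these flags could combine. The binary name, model path, prompt, and device names are placeholders; real device names come from the --list-devices output of your build:

```sh
# Print the ggml devices this build can see; these names are what the device flags expect
./sd --list-devices

# Placeholder example: diffusion and VAE on a discrete GPU, text encoders on CPU
./sd -m model.safetensors -p "a photo of a cat" \
    --diffusion_backend_device Vulkan0 \
    --clip_backend_device CPU \
    --vae-backend-device Vulkan0
```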

C API changes (stable-diffusion.h):

  • Changes the contents of the sd_ctx_params_t struct.
  • void list_backends_to_buffer(char* buffer, size_t buffer_size) writes the details of the available devices to a null-terminated char array. Devices are separated by newline characters (\n), and the name and description of each device are separated by a \t character (a usage sketch follows this list).
  • size_t backend_list_size() returns the size of the buffer needed by list_backends_to_buffer.
  • void add_rpc_device(const char* address) connects to a ggml RPC backend (from llama.cpp).
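
A minimal sketch of how a caller might enumerate devices with the two new functions. It assumes backend_list_size counts the terminating NUL (hence the defensive +1), and the RPC address is a placeholder:

```c
#include <stdio.h>
#include <stdlib.h>
#include "stable-diffusion.h"

int main(void) {
    /* Ask how large the device listing is, then fetch it.
       The +1 is defensive, in case the reported size does not
       include the terminating NUL (an assumption). */
    size_t size = backend_list_size();
    char* buffer = (char*)malloc(size + 1);
    if (buffer == NULL) {
        return 1;
    }
    list_backends_to_buffer(buffer, size + 1);

    /* One device per line, name and description separated by a tab. */
    printf("%s\n", buffer);
    free(buffer);

    /* Hypothetical: register a ggml RPC backend before creating the sd context. */
    /* add_rpc_device("192.168.1.50:50052"); */
    return 0;
}
```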

The default device selection should now consistently prioritize discrete GPUs over iGPUs.

For example, if you want to run the text encoders on the CPU, you would now use --clip_backend_device CPU instead of --clip-on-cpu.

TODO:

  • Fix bug with --lora-apply-mode immediately when the CLIP and diffusion models are running on different (non-CPU) backends.
  • Clean up logs

Important: to use RPC, you need to add -DGGML_RPC=ON to the build. Additionally, it requires either sd.cpp to be built with the -DSD_USE_SYSTEM_GGML flag (I haven't tested that one), or the RPC server to be built with -DCMAKE_C_FLAGS="-DGGML_MAX_NAME=128" -DCMAKE_CXX_FLAGS="-DGGML_MAX_NAME=128" (the default is 64).
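
A sketch of the two builds described above. The build directory names and the rpc-server target name are illustrative; check the llama.cpp tree for the actual target:

```sh
# sd.cpp with RPC support
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# ggml RPC server (from the llama.cpp tree) with the larger tensor-name limit,
# needed when sd.cpp is not built against a system ggml
cmake -B build-rpc -DGGML_RPC=ON \
      -DCMAKE_C_FLAGS="-DGGML_MAX_NAME=128" \
      -DCMAKE_CXX_FLAGS="-DGGML_MAX_NAME=128"
cmake --build build-rpc --config Release --target rpc-server
```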

Fixes #1116

loci-dev force-pushed the main branch 19 times, most recently from 052ebb0 to 76ede2c, on February 3, 2026 at 10:20
loci-dev force-pushed the loci/pr-1184-select-backend branch from 29e8399 to 2d43513 on February 3, 2026 at 10:46
loci-dev temporarily deployed to stable-diffusion-cpp-prod on February 3, 2026 at 10:46 with GitHub Actions

loci-review bot commented Feb 3, 2026

Overview

Analysis of stable-diffusion.cpp across 18 commits reveals minimal performance impact from multi-backend device management refactoring. Of 48,425 total functions, 124 were modified (0.26%), 331 added, and 109 removed. Power consumption increased negligibly: build.bin.sd-cli (+0.388%, 479,167→481,028 nJ) and build.bin.sd-server (+0.239%, 512,977→514,202 nJ).

Function Analysis

SDContextParams Constructor (both binaries): Response time increased ~40% (+2,816-2,840ns) due to initializing 9 new std::string device placement fields replacing 3 boolean flags. Enables per-component GPU/CPU device selection for heterogeneous computing.
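
For illustration only, the shape of that change might look like the sketch below; the member names are invented and the real definitions in the PR may differ:

```cpp
#include <string>

// Before: three booleans, trivially constructed and destroyed (illustrative names).
struct SDContextParamsBefore {
    bool clip_on_cpu;
    bool vae_on_cpu;
    bool control_net_on_cpu;
};

// After: nine per-component device strings, so constructing or destroying the
// params object now touches nine std::string members (illustrative names).
struct SDContextParamsAfter {
    std::string main_device, diffusion_device, clip_device;
    std::string vae_device, tae_device, control_net_device;
    std::string upscaler_device, photomaker_device, vision_device;
};
```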

SDContextParams Destructor (both binaries): Response time increased ~42% (+2,497-2,505ns) from destroying 9 additional string members. One-time cleanup cost outside inference paths.

~StableDiffusionGGML (both binaries): Throughput time increased ~95% (+192ns absolute) managing 7 backend types versus 3, including loop-based cleanup for multiple CLIP backends. Response time impact minimal (+5.2%, ~720ns).

ggml_e8m0_to_fp32_half (sd-cli): Response time improved 24% (-36ns), benefiting quantization operations called millions of times during inference.

Standard library functions (std::_Rb_tree::begin, std::vector::_S_max_size, std::swap): Showed 76-289% throughput increases due to template instantiation complexity, but absolute changes remain under 220ns in non-critical initialization paths.

Additional Findings

All performance regressions occur in initialization and cleanup phases, not inference hot paths. The architectural changes enable multi-GPU workload distribution, per-component device placement (diffusion, CLIP, VAE on separate devices), and runtime backend flexibility. Quantization improvements and multi-GPU capabilities provide net performance gains during actual inference, far exceeding the microsecond-level initialization overhead. Changes are well-justified architectural improvements with negligible real-world impact.

🔎 Full breakdown: Loci Inspector.
💬 Questions? Tag @loci-dev.

loci-dev force-pushed the main branch 7 times, most recently from 5bbc590 to 68f62a5, on February 8, 2026 at 04:51
loci-dev force-pushed the main branch 4 times, most recently from 3ad80c4 to 74d69ae, on February 12, 2026 at 04:47
loci-dev force-pushed the loci/pr-1184-select-backend branch from 2d43513 to 3f62282 on February 17, 2026 at 04:18
loci-dev temporarily deployed to stable-diffusion-cpp-prod on February 17, 2026 at 04:18 with GitHub Actions

loci-review bot commented Feb 17, 2026

Overview

Analysis of 20 commits implementing multi-backend GPU architecture refactoring across 48,713 functions. Modified 132 functions (0.27%), added 406, removed 63. Power consumption increased minimally: build.bin.sd-server +1.13% (515,491→521,302 nJ), build.bin.sd-cli +1.17% (480,110→485,727 nJ). Performance regressions concentrated in initialization and state-change operations; inference hot path unaffected.

Function Analysis

apply_loras_immediately (both binaries): Response time increased +199% (10.4ms→31.1ms, +20.7ms absolute). Throughput time increased +106-109% (+705-719ns). Changes fix critical bug enabling correct LoRA application across heterogeneous backends (e.g., diffusion on GPU, CLIP on CPU). Function now performs backend similarity checking, creates separate tensor filters per model component, and loads LoRAs independently for diffusion/CLIP/VAE backends. Called only during LoRA state changes, not per-inference, making 20ms overhead acceptable.

~StableDiffusionGGML (both binaries): Throughput time increased +95% (201ns→393ns, +192ns). Response time increased +5.2% (13.9μs→14.6μs, +722ns). Destructor now manages multiple backends via vector iteration (multiple CLIP encoders) plus specialized backends for diffusion, TAE, PhotoMaker, vision models. Enables proper multi-device resource cleanup.

~SDContextParams (both binaries): Response time increased +42% (5.9μs→8.4μs, +2.5μs). Replaced 3 boolean flags with 9 std::string members for granular per-component device specification, requiring heap deallocation overhead. Enables flexible multi-GPU device assignment.

Standard library functions showed mixed performance due to compiler/toolchain differences, not application code changes. Other analyzed functions saw negligible changes or improvements.

Additional Findings

Architectural refactoring enables critical capabilities: multi-GPU workload distribution, heterogeneous CPU/GPU computing, multi-encoder support (Flux models), and RPC distributed inference. Commit e511f77 fixed correctness bug in multi-backend LoRA application. Commit 3f62282 added sequential tensor loading for RPC backends. Performance regressions are isolated to non-critical initialization/cleanup code, with zero impact on per-image inference latency. The 1.1-1.2% power increase reflects initialization overhead amortized over many inferences. Trade-off strongly favors functionality over minimal performance cost.

🔎 Full breakdown: Loci Inspector.
💬 Questions? Tag @loci-dev.
