Add offline and online inference drivers with Dockerfile and Anyscale job configs for running SGLang on Ray.
…stness
- Dockerfile: use sglang[all]==0.5.8 + sgl-kernel==0.3.21 instead of a git fork
- Drivers: add logging, named placement groups, exit codes, better error handling
- Job configs: add NCCL_DEBUG, fix submit path comment
- README: add How It Works, Troubleshooting, local run examples

Co-Authored-By: Claude Opus 4.6 <[email protected]>
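The "named placement groups" change itself isn't visible in this view, so here is a minimal sketch of the pattern in Ray, assuming two worker bundles of 4 GPUs each (matching a g5.12xlarge's 4x A10G). The group name and the helper function are illustrative, not the PR's actual code:

```python
import ray
from ray.util import get_placement_group
from ray.util.placement_group import placement_group

# Illustrative name; the PR's drivers may use a different one.
PG_NAME = "sglang-inference-pg"

def get_or_create_pg():
    """Reuse an existing named placement group, or reserve a new one.

    One 4-GPU bundle per worker node (a g5.12xlarge has 4x A10G);
    STRICT_SPREAD places each bundle on a different node.
    """
    ray.init(address="auto", ignore_reinit_error=True)
    try:
        # Ray raises ValueError when no group exists under this name.
        pg = get_placement_group(PG_NAME)
    except ValueError:
        pg = placement_group(
            bundles=[{"GPU": 4, "CPU": 8}] * 2,
            strategy="STRICT_SPREAD",
            name=PG_NAME,
        )
    ray.get(pg.ready())  # block until the reservation is actually granted
    return pg
```

Naming the group makes driver restarts idempotent: a retried job can find the existing reservation instead of leaking a second one.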
- Rename sglang_ray_inference -> sglang_inference
- Batch inference (job.yaml + driver_offline.py) fully working with multi-node TP=4, PP=2 using SGLang's use_ray=True mode
- Ray Serve deployment (service.yaml + serve.py) uses the same pattern as the official Ray LLM SGLang integration, with signal monkey-patching
- Add query.py script for testing the service
- Simplify configuration with environment variables

The serving example is still being validated with multi-replica autoscaling. A single replica works; investigating occasional timeouts with multiple replicas.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Signed-off-by: Robert Nishihara <[email protected]>
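query.py itself doesn't appear in this view. Below is a minimal sketch of what such a test client could look like, assuming the Serve app forwards SGLang's native /generate route and that SERVICE_URL / SERVICE_TOKEN carry the endpoint details; all of those names are assumptions, not the PR's actual interface:

```python
import os
import requests

# Hypothetical defaults; the real query.py, route, and port are not shown in this PR.
BASE_URL = os.environ.get("SERVICE_URL", "http://localhost:8000")
TOKEN = os.environ.get("SERVICE_TOKEN", "")  # Anyscale services expect a bearer token

def query(prompt: str) -> str:
    headers = {"Authorization": f"Bearer {TOKEN}"} if TOKEN else {}
    resp = requests.post(
        f"{BASE_URL}/generate",  # SGLang's native generation route, assuming serve.py exposes it
        json={"text": prompt, "sampling_params": {"temperature": 0.7, "max_new_tokens": 128}},
        headers=headers,
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["text"]

if __name__ == "__main__":
    print(query("What is Ray?"))
```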
xyuzh commented on Mar 8, 2026
```yaml
worker_nodes:
  - instance_type: g5.12xlarge  # 4x A10G
    min_nodes: 4
    max_nodes: 8
```
Contributor (Author):

I see the min_nodes/max_nodes settings are different for the offline and serve configs. Is there a reason for this?
Contributor:

Your fix is correct. 2 and 8 are right, since the replicas autoscale from 1 to 4: with TP=4 and PP=2 each replica occupies 8 GPUs, i.e. two g5.12xlarge nodes, so one replica needs 2 worker nodes and four replicas need 8.
xyuzh commented on Mar 8, 2026
sglang_inference/job.yaml (Outdated)
```yaml
  instance_type: m5.2xlarge  # CPU-only head
worker_nodes:
  - instance_type: g5.12xlarge  # 4x A10G
    min_nodes: 4
```
Contributor (Author):

Shouldn't we only need 2 nodes here?
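The arithmetic behind the question (and behind the later "4 -> 2 worker nodes" fix): a TP=4, PP=2 engine occupies tp_size * pp_size GPUs, and each g5.12xlarge contributes 4.

```python
import math

# TP=4 x PP=2 = 8 GPUs per engine; a g5.12xlarge has 4 A10Gs,
# so ceil(8 / 4) = 2 worker nodes are enough for the batch job.
tp_size, pp_size, gpus_per_node = 4, 2, 4
assert math.ceil(tp_size * pp_size / gpus_per_node) == 2
```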
xyuzh commented on Mar 8, 2026
sglang_inference/service.yaml (Outdated)
```yaml
    max_nodes: 8

env_vars:
  MODEL_PATH: "Qwen/Qwen3-1.7B"
```
Contributor (Author):

Have you succeeded with the 30B model?
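The configs thread settings like MODEL_PATH through the env_vars block. A sketch of the consuming side in a driver, with the default mirroring service.yaml; only MODEL_PATH is from the PR, the other variable names are illustrative:

```python
import os

# MODEL_PATH comes from the YAML env_vars block (default mirrors service.yaml);
# TP_SIZE / PP_SIZE are illustrative names, not necessarily the PR's.
MODEL_PATH = os.environ.get("MODEL_PATH", "Qwen/Qwen3-1.7B")
TP_SIZE = int(os.environ.get("TP_SIZE", "4"))
PP_SIZE = int(os.environ.get("PP_SIZE", "2"))
```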
- Switch from Engine to sglang.srt.ray.engine.RayEngine
- Upgrade base image to Ray 2.54.0 and install from the sglang main branch
- Update default model to Qwen3.5-27B
- Add threaded engine init with warmup in serve.py to avoid event loop conflicts
- Fix node counts in job/service configs (4 -> 2 worker nodes)
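The "threaded engine init with warmup" item addresses a real hazard: engine construction makes blocking calls that can starve Serve's asyncio event loop. A minimal sketch of the shape, with build_fn standing in for the real constructor (RayEngine's signature isn't shown in this PR) and a hypothetical warmup call:

```python
import threading

class EngineHolder:
    """Build a blocking engine off the event loop thread, then warm it up.

    build_fn is a placeholder for whatever constructs the engine (the PR uses
    sglang.srt.ray.engine.RayEngine; its exact arguments aren't shown here).
    """

    def __init__(self, build_fn):
        self.engine = None
        self._error = None
        self._thread = threading.Thread(target=self._init, args=(build_fn,), daemon=True)
        self._thread.start()

    def _init(self, build_fn):
        try:
            engine = build_fn()
            engine.generate("warmup", {"max_new_tokens": 1})  # hypothetical warmup call
            self.engine = engine
        except Exception as e:  # surface init failures to the waiting caller
            self._error = e

    def wait_ready(self, timeout=600):
        self._thread.join(timeout)
        if self._error:
            raise self._error
        if self.engine is None:
            raise TimeoutError("engine init did not finish in time")
        return self.engine
```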
- Remove threading and signal monkey-patching from serve.py; use RayEngine directly with async_generate
- Add Dockerfile step to patch ray.serve replica.py with two-phase init support (compatible with Ray 2.54.0)
- Improve logging setup to avoid duplicate handlers
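"Avoid duplicate handlers" typically means guarding module-level logging setup so that re-imports across Ray workers or Serve replicas don't attach a second handler and double every log line. One common guard, sketched here; the PR's actual helper isn't shown:

```python
import logging

def setup_logger(name: str = "sglang_inference") -> logging.Logger:
    """Configure a logger exactly once; repeated calls reuse the handler."""
    logger = logging.getLogger(name)
    if logger.handlers:  # already configured in this process: do nothing
        return logger
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.propagate = False  # don't double-log through the root logger
    return logger
```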