Conversation
Move spec-like documents from root and docs/internal into a structured layout under docs/superpowers with timestamped filenames:
- specs/: active design documents (SSC PRD, technical spec, EdgeZero migration, auction orchestration flow, production readiness report)
- archive/: completed or historical specs (optimization, sequence, publisher IDs audit)
Design spec for streaming HTTP responses through the publisher proxy when Next.js is disabled. Covers two implementation steps:
1. Make the streaming pipeline chunk-emitting (HtmlRewriterAdapter, gzip, encoder finalization)
2. Wire up Fastly StreamingBody via stream_to_client() with entry point migration from #[fastly::main] to undecorated main()

Includes streaming gate logic, error handling, rollback strategy, and testing plan. Verified against Fastly SDK 0.11.12 API.
Add before/after measurement protocol using Chrome DevTools MCP tools: network timing capture, Lighthouse audits, performance traces, memory snapshots, and automated body hash comparison for correctness.
…ation test
- Add debug-level logging to process_chunks showing total bytes read and written per pipeline invocation
- Add brotli-to-brotli round-trip test to cover the into_inner() finalization path
- Add test proving HtmlWithPostProcessing accumulates output when post-processors are registered while the streaming path passes through
- Group std imports together (cell, io, rc) before external crates
- Document supported compression combinations on PipelineConfig
- Remove dead-weight byte counters from process_chunks hot loop
- Fix stale comment referencing removed process_through_compression
- Fix brotli finalization: use drop(encoder) instead of into_inner() to make the intent clear (CompressorWriter writes trailer on drop)
- Document reset() as no-op on HtmlRewriterAdapter (single-use)
- Add brotli round-trip test covering into_inner finalization path
- Add gzip HTML rewriter pipeline test (compressed round-trip with lol_html, not just StreamingReplacer)
- Add HtmlWithPostProcessing accumulation vs streaming behavior test
- Add Eq derive to Compression enum (all unit variants, trivially correct)
- Brotli finalization: use into_inner() instead of drop() to skip redundant flush and make finalization explicit
- Document process_chunks flush semantics: callers must still call encoder-specific finalization after this method returns
- Warn when HtmlRewriterAdapter receives data after finalization (rewriter already consumed, data would be silently lost)
- Make HtmlWithPostProcessing::reset() a true no-op matching its doc (clearing auxiliary state without resetting rewriter is inconsistent)
- Document extra copying overhead on post-processor path vs streaming
- Assert output content in reset-then-finalize test (was discarded)
- Relax per-chunk emission test to not depend on lol_html internal buffering behavior — assert concatenated correctness + at least one intermediate chunk emitted
…ation' into feature/streaming-pipeline-phase2
Accumulate text fragments via Mutex<String> until last_in_text_node is true, then process the complete text. Intermediate fragments return RemoveNode to suppress output.
Accumulate text fragments via Mutex<String> until last_in_text_node is true, then match and rewrite on the complete text. Non-GTM scripts that were fragmented are emitted unchanged.
All script rewriters (NextJS __NEXT_DATA__, GTM) are now fragment-safe — they accumulate text internally until last_in_text_node. The buffered adapter workaround is no longer needed. Always use streaming mode in create_html_processor.
When rewrite_structured returns Keep on accumulated content, intermediate fragments were already removed via RemoveNode. Emit the full accumulated content via Replace to prevent silent data loss. Also updates spec to reflect Phase 3 completion.
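The accumulation pattern the commits above describe can be sketched as follows. This is a simplified illustration, not the repo's code: `FragmentAction`, `AccumulatingRewriter`, and the `OLD_ID` rewrite are hypothetical stand-ins; the `Mutex<String>` buffer and the `last_in_text_node` signal (from lol_html) match what the commits name.

```rust
use std::sync::Mutex;

/// What the rewriter tells the HTML pipeline to do with a text fragment.
/// (Hypothetical stand-in for the repo's rewrite action enum.)
#[derive(Debug, PartialEq)]
enum FragmentAction {
    /// Suppress this intermediate fragment from the output.
    RemoveNode,
    /// Replace the (now empty) text node with the full rewritten content.
    Replace(String),
}

struct AccumulatingRewriter {
    // Mutex satisfies the Sync bound on the rewriter trait; fragments
    // arrive sequentially, so there is no concurrent access in practice.
    accumulated_text: Mutex<String>,
}

impl AccumulatingRewriter {
    fn new() -> Self {
        Self { accumulated_text: Mutex::new(String::new()) }
    }

    /// Called once per text fragment; `last_in_text_node` is lol_html's
    /// signal that this is the final fragment of the text node.
    fn on_text(&self, fragment: &str, last_in_text_node: bool) -> FragmentAction {
        let mut buf = self.accumulated_text.lock().unwrap();
        buf.push_str(fragment);
        if !last_in_text_node {
            // Intermediate fragment: suppress it; the full text is emitted later.
            return FragmentAction::RemoveNode;
        }
        // Final fragment: drain the buffer so it cannot leak into the next
        // <script> element, then rewrite the complete text. Even on a "keep"
        // decision this must Replace, because the intermediates were removed.
        let full = std::mem::take(&mut *buf);
        FragmentAction::Replace(full.replace("OLD_ID", "NEW_ID"))
    }
}
```

The key invariant is the drain on the final fragment: because every earlier fragment returned `RemoveNode`, failing to re-emit the accumulated content would silently drop it.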
- Add response.get_status().is_success() check to streaming gate so 4xx/5xx error pages stay buffered with complete status codes
- Add streaming gate unit tests covering all gate conditions
- Add stream_publisher_body gzip round-trip test
- Add small-chunk (32 byte) pipeline tests for __NEXT_DATA__ and GTM that prove fragmented text nodes survive the real lol_html path
Phase 3 performance results: 35% TTFB improvement, 37% DOM Complete improvement on getpurpose.ai staging vs production. Phase 4 adds binary pass-through streaming via PublisherResponse::PassThrough.
- Extract streaming gate into can_stream_response() function so tests call production code instead of reimplementing the formula
- Refactor GTM rewrite() to use Option<String> pattern instead of uninit variable, replacing indirect text.len() != content.len() accumulation check with explicit full_content.is_some()
- Add cross-element safety doc comment on accumulated_text fields in GTM and NextJsNextDataRewriter
- Document RSC placeholder deliberate non-accumulation strategy
- Update spec to reflect script rewriters are now fragment-safe
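The extracted gate is a pure predicate, which is what makes it unit-testable without reimplementing the formula. A minimal sketch, assuming the gate conditions named across these commits (the exact signature in the repo may differ):

```rust
/// Sketch of the extracted streaming gate. A response can stream when:
/// the origin returned a 2xx (error pages stay buffered so their status
/// codes arrive complete), the content encoding is one the pipeline can
/// decompress, and it is not HTML that still needs buffering post-processors.
fn can_stream_response(
    is_success: bool,
    encoding_supported: bool,
    is_html: bool,
    has_post_processors: bool,
) -> bool {
    is_success && encoding_supported && (!is_html || !has_post_processors)
}
```

Because tests now call this function rather than restating the boolean locally, a change to the gate (adding a condition, restructuring the expression) is caught by the existing assertions.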
- Document why Mutex<String> is used (Sync bound on trait, not concurrent access) in both NextJsNextDataRewriter and GoogleTagManagerIntegration
- Add accumulation_buffer_drains_between_consecutive_script_elements test proving the buffer doesn't leak between two sequential <script> elements (fragmented GTM followed by fragmented non-GTM)
Non-processable 2xx responses (images, fonts, video) now stream directly to the client via PublisherResponse::PassThrough instead of buffering the entire body in memory. Content-Length is preserved since the body is unmodified.
Tests verify non-processable 2xx responses return PassThrough, non-processable errors stay Buffered, and processable content goes through Stream (not PassThrough).
Adds pass_through_preserves_body_and_content_length test that verifies io::copy produces identical output and Content-Length is preserved. Updates handle_publisher_request doc to describe all three response variants.
- Exclude 204 No Content from PassThrough (must not have body)
- Remove Content-Length before streaming (stream_to_client uses chunked encoding, keeping both violates HTTP spec)
- Add tests for 204 exclusion and empty-host interaction
- Update doc comment and byte-level test to reflect CL removal
PassThrough reattaches the unmodified body and uses send_to_client() instead of stream_to_client() + io::copy. This preserves Content-Length (avoids chunked encoding overhead for images/fonts) and lets Fastly stream from its internal buffer without WASM memory buffering.
- Fix PassThrough doc comment operation order (set_body before finalize)
- Update function doc to describe actual PassThrough flow (set_body + send_to_client, not io::copy)
- Remove dead _enters_early_return variable, replace with comment
When the origin returns a processable 2xx response with an encoding the pipeline cannot decompress (e.g. `zstd` from a misbehaving origin), the buffered fallback previously still routed the body through process_response_streaming. `Compression::from_content_encoding` maps unknown values to `None`, so the rewriter would treat the compressed bytes as identity-encoded text and emit garbled output. Bypass the rewrite pipeline entirely in that case and return the origin response untouched. Adds a test asserting byte-for-byte pass-through and updates the is_supported_content_encoding doc to reflect the new behavior. Addresses PR #585 review feedback from @prk-Jr.
…eaming-pipeline-phase2 # Conflicts: # crates/trusted-server-core/src/publisher.rs
- Clarify PassThrough variant doc: finalize_response() and send_to_client() are applied at the outer dispatch level, not in this arm
- Hoist status outside the early-return block and reuse is_success to eliminate the duplicate get_status() call
- Exclude 205 Reset Content alongside 204 No Content per RFC 9110 §15.3.6; add pass_through_gate_excludes_205_reset_content test
- Log binary pass-through before returning to aid production tracing
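The pass-through routing built up across these commits reduces to a small status gate. A sketch under stated assumptions: the function name and signature are illustrative, not the repo's API; the 204/205 exclusions follow RFC 9110, which forbids those statuses a body.

```rust
/// Sketch of the binary pass-through gate. Non-processable 2xx responses
/// (images, fonts, video) are sent with their body reattached so
/// Content-Length is preserved; 204 No Content and 205 Reset Content are
/// excluded because RFC 9110 forbids them a message body; errors stay
/// buffered so status codes arrive complete.
fn can_pass_through(status: u16, is_processable: bool) -> bool {
    let is_success = (200..300).contains(&status);
    let bodyless = status == 204 || status == 205;
    is_success && !bodyless && !is_processable
}
```

Responses that fail this gate fall back to the streaming or buffered paths depending on the separate streaming gate.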
/// No-op. `HtmlRewriterAdapter` is single-use: the rewriter consumes its
/// [`Settings`](lol_html::Settings) on construction and cannot be recreated.
/// Calling [`process_chunk`](StreamProcessor::process_chunk) after
/// [`process_chunk`](StreamProcessor::process_chunk) with `is_last = true`
The doc comment has a copy-paste error on the second intra-doc link:
/// Calling [`process_chunk`](StreamProcessor::process_chunk) after
/// [`process_chunk`](StreamProcessor::process_chunk) with `is_last = true`
The second occurrence should describe what happens before the call, not repeat the same method name. Something like:
/// Calling [`process_chunk`](StreamProcessor::process_chunk) after
/// [`reset`](StreamProcessor::reset) or after `is_last = true` will produce empty output.
Or simpler:
/// Calling [`process_chunk`](StreamProcessor::process_chunk) after finalization
/// (`is_last = true`) will produce empty output — the rewriter is already done.
assert!(
    encoding_supported && (!is_html || !has_post_processors),
    "should stream 2xx HTML without post-processors"
);
These gate tests validate boolean expressions over local variables, not the actual can_stream logic in handle_publisher_request. If the gate formula in the function changes (e.g. a condition is added or the boolean is restructured), these tests will still pass.
Consider replacing the boolean-expression tests with an integration-level test that invokes handle_publisher_request with a real response and asserts which variant of PublisherResponse is returned. That way the tests actually exercise the gate, not just a re-statement of its formula.
The tests for 204/205 exclusion in the PassThrough gate are similarly speculative — they replay the conditional logic rather than asserting on the function's observable output.
    .unwrap_or_default()
    .to_lowercase();
if !should_process || request_host.is_empty() || !is_success {
    log::debug!(
When should_process is true and request_host.is_empty() is true, the code enters this block and falls through to Buffered(response) at line 503 — silently returning the origin's processable content (HTML, JS, CSS) unmodified without URL rewriting. This was the same behavior before, but a log::warn! would make misconfiguration (missing Host header) visible in logs. The current log::debug! means a mis-proxied page with unrewritten URLs would be invisible at production log levels.
ChristianPavilonis left a comment
Second pass review. Previous comments resolved: both prior inline comments are addressed — the has_html_post_processors() optimization is present in registry.rs and the streaming plan doc was updated.
New findings:
- Doc comment copy-paste in `HtmlRewriterAdapter::reset()` (commented inline). Minor, but renders incorrectly in rustdoc.
- Gate tests validate boolean expressions, not function behavior (commented inline). The `streaming_gate_*` and `pass_through_gate_*` tests reproduce the conditional logic from `handle_publisher_request` as local variables and assert on those variables. They pass even if the gate in the actual function is changed. The `unsupported_encoding_response_is_returned_unmodified` test (which uses a real `Response`) is the right model — the gate tests should follow that pattern, calling `handle_publisher_request` and asserting on the returned `PublisherResponse` variant.
- Missing WARN log when processable content is skipped due to an empty request host (commented inline). The combined condition `!should_process || request_host.is_empty() || !is_success` logs at DEBUG for all three cases. An empty `request_host` with processable content is a misconfiguration that silently returns unrewritten HTML/JS to the client. Worth a separate warn branch.
No blocking issues. The core streaming architecture — chunk-emitting HtmlRewriterAdapter, unified process_chunks loop, PublisherResponse enum, and the pass-through/stream/buffered routing in main.rs — is sound. The Mutex accumulation buffers in GTM and NextJS rewriters are correctly drained on the final fragment path. The brotli encoder finalization via flush() inside process_chunks followed by into_inner() is correct. Content-Length handling is consistent across all three response paths.
ChristianPavilonis left a comment
Findings from second pass are non-blocking — approving.
Summary
- Streams the publisher response body to the client via `StreamingBody` when `html_post_processors()` is empty (Next.js disabled)
- Fastly SDK surface used: `stream_to_client`, `StreamingBody`, `Request::from_client`

Stacked on #560 (docs reorganization).
Closes #565
Test plan
- `fastly::init`
- `StreamingBody` implements `Write`
- abort-on-drop