feat(proxy): tunnel WebSocket upgrades through path-prefix forwarder #30
Reference: lhumina_code/hero_proxy!30
Branch: feat/ws-upgrade-tunnel
Closes #29.
Summary
Adds explicit WebSocket-upgrade tunnelling to `proxy_handler`'s path-prefix forwarding path. Without this, browsers reconnect-loop on every WS connection through the proxy: the 101 reaches the client but no frames flow afterwards (the proxy stays in HTTP/1.1 mode while the client flips to WS).

Mirrors `hero_router::server::routes::proxy_ws_tunnel` (same primitives, same patterns) but targets a TCP upstream (the router) instead of a Unix socket.

Changes
* `crates/hero_proxy_server/Cargo.toml`: add `http-body-util = "0.1"` and `bytes = "1"`.
* `Cargo.lock`: updated accordingly.
* `crates/hero_proxy_server/src/proxy.rs`: `is_ws_upgrade(headers)` + `forward_ws_to_upstream(upstream_url, req)` (~200 LOC), plus a 4-line branch in `proxy_handler` that short-circuits to the new helper before the existing `forward_to_upstream` call.

Net: +210 lines, 0 removals. Existing non-WS code paths are untouched.
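The detection half of the change is simple enough to sketch. This is a hypothetical, std-only simplification (the real helper presumably operates on hyper's `HeaderMap`; the slice-of-pairs signature here is illustrative only):

```rust
// Hypothetical, simplified sketch of an `is_ws_upgrade`-style check: a
// case-insensitive match on the `Connection` / `Upgrade` header pair.
fn is_ws_upgrade(headers: &[(&str, &str)]) -> bool {
    let get = |name: &str| {
        headers
            .iter()
            .find(|(n, _)| n.eq_ignore_ascii_case(name))
            .map(|(_, v)| *v)
    };
    // `Connection` may carry a token list, e.g. "keep-alive, Upgrade".
    let conn_has_upgrade = get("connection").map_or(false, |v| {
        v.split(',').any(|t| t.trim().eq_ignore_ascii_case("upgrade"))
    });
    let upgrade_is_ws = get("upgrade")
        .map_or(false, |v| v.trim().eq_ignore_ascii_case("websocket"));
    conn_has_upgrade && upgrade_is_ws
}

fn main() {
    assert!(is_ws_upgrade(&[("Connection", "Upgrade"), ("Upgrade", "websocket")]));
    assert!(is_ws_upgrade(&[("connection", "keep-alive, Upgrade"), ("upgrade", "WebSocket")]));
    assert!(!is_ws_upgrade(&[("Connection", "keep-alive")]));
    println!("ok");
}
```

Both header names and values are matched case-insensitively, since clients vary in capitalisation of `Upgrade`/`websocket`.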
How it works
The plain HTTP forwarder used by `proxy_handler` (`forward_to_upstream`) is request/response only: it does not wire up the upgraded socket after a `101 Switching Protocols`. Without this branch, browser WS connections through the proxy reconnect-loop; the 101 reaches the browser, no frames flow afterwards, the connection drops, and the client reconnects every ~1s.

Two helpers are added in `proxy.rs`:

* `is_ws_upgrade(headers)` detects the standard `Connection: upgrade` + `Upgrade: websocket` pair.
* `forward_ws_to_upstream(upstream_url, req)` is a TCP-targeted version of hero_router's `proxy_ws_tunnel`. It grabs the client-side `hyper::upgrade::on(&mut req)`, opens a TCP connection to the router, performs the HTTP/1.1 handshake `with_upgrades()`, sends the upgrade request with all original headers preserved (`Host` overridden), and once the upstream returns 101, splices the two upgraded IOs with `tokio::io::copy_bidirectional`.

`proxy_handler` short-circuits to the new helper before `forward_to_upstream` whenever `is_ws_upgrade` matches.

Test plan
* `cargo build --release -p hero_proxy_server` is clean.
* A handshake with `Upgrade: websocket` headers via `:9997` (proxy) returns the same `Sec-WebSocket-Accept` as via `:9988` (router direct), and presence-event payloads stream back through the proxy path post-fix.
* `GET /hero_collab/ui` returns 200 and `POST /hero_collab/rpc/rpc` (JSON) still works; the non-WS path is unchanged.

Out of scope (follow-up issues recommended)
* `dispatch_domain_route` (the host-matched path with bearer/OAuth/signature/IP auth modes) still uses plain `forward_to_upstream`; a UDS-targeted WS tunnel variant would be needed if any domain route ever fronts a WS service. No domain route does today, so this isn't currently load-bearing.
* The proxy response now duplicates `vary` and a few CORS headers (both router and proxy add them via their respective `tower_http::cors::CorsLayer` chains). Functionally harmless; can be cleaned up by stripping the headers the proxy is about to add. ~5 LOC.

Notes
* The `client_upgrade` future is captured BEFORE `req.into_parts()` consumes the request; moving this call later breaks the upgrade.
* `with_upgrades()` is required; without it, hyper closes the connection after the response body (which doesn't exist for a 101) drains.
* The new deps are small: `http-body-util = "0.1"` (`Empty<Bytes>` for the upgrade-request body) and `bytes = "1"` (already a transitive dep of hyper).
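To make the "headers preserved, Host overridden" rule concrete, here is a hypothetical std-only sketch of serializing the upstream handshake request. `build_upgrade_request` is an illustrative name, not the actual helper (the real code builds a typed hyper request rather than a string):

```rust
// Illustrative sketch (not the real hero_proxy code): serialize an HTTP/1.1
// upgrade request, copying the client's headers verbatim but overriding
// Host with the upstream authority.
fn build_upgrade_request(path: &str, upstream_host: &str, headers: &[(&str, &str)]) -> String {
    let mut out = format!("GET {path} HTTP/1.1\r\n");
    out.push_str(&format!("Host: {upstream_host}\r\n")); // Host is rewritten
    for (name, value) in headers {
        if name.eq_ignore_ascii_case("host") {
            continue; // original Host dropped in favour of the override above
        }
        out.push_str(&format!("{name}: {value}\r\n"));
    }
    out.push_str("\r\n"); // end of header block; an upgrade request has no body
    out
}

fn main() {
    // The path, port, and Sec-WebSocket-Key here are illustrative
    // (the key is the RFC 6455 sample key).
    let req = build_upgrade_request(
        "/ws",
        "127.0.0.1:9988",
        &[
            ("Host", "example.com"),
            ("Connection", "Upgrade"),
            ("Upgrade", "websocket"),
            ("Sec-WebSocket-Key", "dGhlIHNhbXBsZSBub25jZQ=="),
        ],
    );
    assert!(req.starts_with("GET /ws HTTP/1.1\r\nHost: 127.0.0.1:9988\r\n"));
    assert!(!req.contains("example.com"));
    assert!(req.contains("Upgrade: websocket\r\n"));
    println!("{req}");
}
```

Preserving the original headers matters because `Sec-WebSocket-Key` and any subprotocol headers must reach the router unmodified for the handshake to validate.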