24 Commits

Siavash Sameni
6f4e8eb9f6 fix: URL-based room routing — /manwe serves index.html with room pre-filled
ServeDir now falls back to index.html for unknown paths (SPA routing).
https://host:port/manwe loads the page with room input pre-filled as "manwe".
The JS getRoom() helper already reads the path; now the page actually loads.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 15:51:47 +04:00
Siavash Sameni
634cd40fdc fix: web bridge low-latency config — disable silence suppression, reduce jitter buffer
PTT mode was causing delayed/lost audio because:
1. Silence suppression ate the start of speech after PTT release
2. Jitter buffer target depth was too high for interactive use

Web bridge now uses:
- suppression_enabled: false (PTT handles silence at browser level)
- jitter_target: 3 (60ms vs ~1s default)
- jitter_max: 20 (400ms cap)
- jitter_min: 1 (start playing after 20ms)
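The millisecond figures in parentheses follow from 20 ms audio frames; a tiny illustrative helper (not part of the codebase) makes the arithmetic explicit:

```rust
/// Jitter-buffer depth in frames -> added latency in ms, assuming 20 ms frames
/// (48 kHz mono, 960 samples per frame).
fn depth_to_ms(frames: usize) -> usize {
    frames * 20
}
```

So jitter_target: 3 buys 60 ms of buffering, jitter_max: 20 caps it at 400 ms, and jitter_min: 1 starts playout after a single 20 ms frame.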

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 15:31:23 +04:00
Siavash Sameni
6310864b0b fix: client sends Hangup before disconnect, relay handles timeouts gracefully
Client: sends SignalMessage::Hangup(Normal) before closing in all modes
(send-tone, file mode, silence mode) so the relay knows the session ended.

Relay: downgrades "timed out" / "reset" / "closed" recv errors from
ERROR to INFO since these are normal disconnect scenarios.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 15:15:47 +04:00
Siavash Sameni
4d2c9838c5 fix: eliminate all compiler warnings across client, relay, web
- Remove unused imports in featherchat.rs (tracing, QualityProfile)
- Remove unused comfort_noise field from CallEncoder (cn_level is used instead)
- Prefix unused _metrics_file in CliArgs
- Prefix unused _addr in Participant
- Remove unused RoomSlot struct and rooms field from web AppState
- Remove unused HashMap import from web main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 15:13:48 +04:00
Siavash Sameni
ab8a7f7a96 fix: client exits after --send-tone completes (was hanging on recv task)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 15:04:44 +04:00
Siavash Sameni
59268f0391 fix: add libssl-dev to Linux build deps (openssl-sys needs it)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 15:00:20 +04:00
Siavash Sameni
a833694568 refactor: build-linux.sh — persistent VM with --prepare/--build/--transfer steps
Replaces the single-shot ephemeral VM approach:
- --prepare: create VM, install deps (Rust, cmake, etc), upload source
- --build: build on VM with full output (iterate on errors)
- --transfer: download binaries to target/linux-x86_64/
- --destroy: delete VM when done
- --upload: re-upload source to existing VM
- --all: prepare + build + transfer (VM persists)

VM reuse: --prepare detects existing wzp-builder VM and just re-uploads.
All steps get VM IP from hcloud server list (last created).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 14:48:51 +04:00
Siavash Sameni
6d5ee55393 fix: install rustls crypto provider in wzp-client (same as relay/web)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 14:45:26 +04:00
Siavash Sameni
0dc381e948 feat: protocol improvements — live trunking, mini-frames, noise suppression, adaptive jitter
T6 wiring: Trunking in relay hot path
- TrunkedForwarder wraps transport with TrunkBatcher
- run_participant uses 5ms flush timer when trunking enabled
- send_trunk/recv_trunk on QuinnTransport
- --trunking flag on relay config
- 2 new tests: forwarder batches, auto-flush on full

T7 wiring: Mini-frames in encoder/decoder
- MediaPacket::encode_compact/decode_compact with MiniFrameContext
- CallEncoder sends mini-headers for consecutive frames (full every 50th)
- CallDecoder auto-detects full vs mini on receive
- mini_frames_enabled in CallConfig (default true)
- 3 new tests: encode/decode sequence, periodic full, disabled mode

Noise suppression (nnnoiseless/RNNoise)
- NoiseSupressor in wzp-codec: pure Rust ML-based noise removal
- Processes 960-sample frames as two 480-sample halves
- Integrated in CallEncoder before silence detection
- noise_suppression in CallConfig (default true)
- 4 new tests: creation, processing, SNR improvement, passthrough

T1-S4: Adaptive playout delay
- AdaptivePlayoutDelay: EMA-based jitter tracking (NetEq-inspired)
- Computes target_delay from observed inter-arrival jitter
- JitterBuffer::new_adaptive() uses adaptive delay
- adaptive_jitter in CallConfig (default true)
- 5 new tests: stable, jitter increase, recovery, clamping, estimate
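The EMA-based tracking can be sketched as follows; this is a standalone illustration of the idea, not the AdaptivePlayoutDelay code — the alpha value, the 2x-jitter target formula, and all names are assumptions:

```rust
/// EMA-based inter-arrival jitter tracker (NetEq-inspired), as a sketch.
struct AdaptiveDelaySketch {
    mean_ms: f64,              // EMA of the inter-arrival gap
    jitter_ms: f64,            // EMA of the gap's deviation from the mean
    alpha: f64,                // smoothing factor (assumed 0.1)
    last_arrival_ms: Option<f64>,
}

impl AdaptiveDelaySketch {
    fn new() -> Self {
        Self { mean_ms: 0.0, jitter_ms: 0.0, alpha: 0.1, last_arrival_ms: None }
    }

    /// Feed a packet arrival timestamp; updates both EMAs.
    fn on_arrival(&mut self, t_ms: f64) {
        if let Some(prev) = self.last_arrival_ms {
            let gap = t_ms - prev;
            self.mean_ms += self.alpha * (gap - self.mean_ms);
            self.jitter_ms += self.alpha * ((gap - self.mean_ms).abs() - self.jitter_ms);
        }
        self.last_arrival_ms = Some(t_ms);
    }

    /// Target depth in 20 ms frames: enough to absorb ~2x observed jitter,
    /// clamped between the configured min and max.
    fn target_depth(&self, min: usize, max: usize) -> usize {
        let frames = (2.0 * self.jitter_ms / 20.0).ceil() as usize;
        frames.clamp(min, max)
    }
}
```

With a perfectly steady 20 ms arrival cadence the jitter estimate decays toward zero and the target clamps to the minimum, which is the "stable" behavior the tests above describe.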

272 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 14:24:53 +04:00
Siavash Sameni
34cd1017c1 feat: IAX2-inspired protocol improvements — trunking, mini-frames, silence suppression, call control (P2-T6/T7/T8/T9)
WZP-P2-T6: Trunking
- TrunkFrame/TrunkEntry: pack N session packets into one datagram
- Wire format: [count:u16][session_id:2][len:u16][payload]...
- TrunkBatcher: batches by count (10) or bytes (1200), flushes on limit
- 5 tests: encode/decode roundtrip, empty frame, batcher fill/flush, byte limit
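The wire format above can be sketched in a few lines. This is an illustrative stand-in, not the wzp-proto implementation: big-endian byte order and a u16 session id (the "2" in [session_id:2]) are assumptions, and the function names are hypothetical:

```rust
/// Pack entries as [count:u16] then [session_id:2][len:u16][payload] per entry.
fn encode_trunk(entries: &[(u16, &[u8])]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&(entries.len() as u16).to_be_bytes()); // [count:u16]
    for (session_id, payload) in entries {
        out.extend_from_slice(&session_id.to_be_bytes());             // [session_id:2]
        out.extend_from_slice(&(payload.len() as u16).to_be_bytes()); // [len:u16]
        out.extend_from_slice(payload);                               // [payload]
    }
    out
}

/// Unpack; returns None on truncated input.
fn decode_trunk(buf: &[u8]) -> Option<Vec<(u16, Vec<u8>)>> {
    let count = u16::from_be_bytes([*buf.get(0)?, *buf.get(1)?]) as usize;
    let mut pos = 2;
    let mut out = Vec::with_capacity(count);
    for _ in 0..count {
        let sid = u16::from_be_bytes([*buf.get(pos)?, *buf.get(pos + 1)?]);
        let len = u16::from_be_bytes([*buf.get(pos + 2)?, *buf.get(pos + 3)?]) as usize;
        pos += 4;
        out.push((sid, buf.get(pos..pos + len)?.to_vec()));
        pos += len;
    }
    Some(out)
}
```

Batching N small voice packets behind one datagram header is the whole win: per-entry overhead drops to 4 bytes.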

WZP-P2-T7: Mini-frames
- MiniHeader: 4-byte delta header (timestamp_delta + payload_len)
- FRAME_TYPE_FULL (0x00) / FRAME_TYPE_MINI (0x01) discriminator
- MiniFrameContext: expands mini-headers to full by tracking baseline
- Saves 8 bytes per packet (5 vs 13 bytes with type prefix)
- 5 tests: encode/decode, wire size, context expand, no baseline, size comparison
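A 4-byte header leaves room for two 16-bit fields. The exact field widths and byte order here are assumptions, but the sketch shows the delta-header idea:

```rust
/// Hypothetical 4-byte mini-header layout: two big-endian u16 fields.
fn encode_mini(timestamp_delta: u16, payload_len: u16) -> [u8; 4] {
    let d = timestamp_delta.to_be_bytes();
    let l = payload_len.to_be_bytes();
    [d[0], d[1], l[0], l[1]]
}

fn decode_mini(b: [u8; 4]) -> (u16, u16) {
    (u16::from_be_bytes([b[0], b[1]]), u16::from_be_bytes([b[2], b[3]]))
}
```

The receiver adds timestamp_delta to the baseline established by the last full header, which is why a mini-frame arriving before any full frame cannot be expanded.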

WZP-P2-T8: Silence suppression
- SilenceDetector: RMS-based detection with hangover (5 frames = 100ms)
- ComfortNoise: low-level random noise generator
- CodecId::ComfortNoise variant for CN packets
- CallEncoder: suppresses silent frames, sends 1-byte CN every 200ms
- CallDecoder: generates comfort noise on CN packets
- ~50% bandwidth savings in typical conversations
- 6 tests: silence/speech detection, hangover, CN generation, RMS math, suppression
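The RMS-plus-hangover logic can be shown in a standalone sketch (thresholds mirror the defaults stated elsewhere in this log; the struct and field names are illustrative, not the wzp-codec SilenceDetector):

```rust
/// RMS-based silence detector with hangover.
struct SilenceSketch {
    threshold_rms: f64,   // e.g. 100.0 for i16 PCM
    hangover_frames: u32, // e.g. 5 frames = 100 ms at 20 ms/frame
    frames_below: u32,    // consecutive frames under the threshold
}

impl SilenceSketch {
    fn new(threshold_rms: f64, hangover_frames: u32) -> Self {
        Self { threshold_rms, hangover_frames, frames_below: 0 }
    }

    fn rms(pcm: &[i16]) -> f64 {
        if pcm.is_empty() { return 0.0; }
        let sum: f64 = pcm.iter().map(|&s| (s as f64) * (s as f64)).sum();
        (sum / pcm.len() as f64).sqrt()
    }

    /// True only after the signal has stayed below the threshold for more
    /// than `hangover_frames` consecutive frames, so word endings survive.
    fn is_silent(&mut self, pcm: &[i16]) -> bool {
        if Self::rms(pcm) < self.threshold_rms {
            self.frames_below += 1;
            self.frames_below > self.hangover_frames
        } else {
            self.frames_below = 0;
            false
        }
    }
}
```

The hangover is what distinguishes this from naive gating: the first few quiet frames after speech are still transmitted.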

WZP-P2-T9: Call control signals
- SignalMessage: Hold, Unhold, Mute, Unmute, Transfer, TransferAck
- CallSignalType mapping in featherchat.rs for all new variants
- 4 serde roundtrip tests + signal type mapping tests

255 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 14:13:05 +04:00
Siavash Sameni
a64b79d953 feat: probe mesh mode + Grafana dashboard (T5-S6/S7) — completes T5
WZP-P2-T5-S6: Probe mesh mode
- ProbeMesh coordinator: wraps multiple ProbeRunners, spawns all concurrently
- mesh_summary(): scans registry, formats human-readable health table
- /mesh HTTP endpoint on metrics port alongside /metrics
- --probe-mesh flag, --mesh-status for CLI diagnostics
- Replaces individual probe spawn loop with ProbeMesh::run_all()
- 4 tests: mesh creation, empty/populated summary, zero targets

WZP-P2-T5-S7: Grafana dashboard
- docs/grafana-dashboard.json — importable directly into Grafana
- Row 1: Relay Health (sessions, rooms, packets/s, bytes/s, auth, handshake)
- Row 2: Call Quality (buffer depth, loss%, RTT, underruns per session)
- Row 3: Inter-Relay Mesh (RTT heatmap, loss, jitter, probe up/down)
- Row 4: Web Bridge (connections, frames bridged, auth failures, latency)
- Datasource variable ${DS_PROMETHEUS}, auto-refresh 10s
- Color thresholds: loss 2%/5%, RTT 100ms/300ms, probe up=green/down=red

T5 Telemetry & Observability is now COMPLETE (all 7 subtasks).
235 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 13:18:50 +04:00
Siavash Sameni
216ebf4a25 feat: per-session metrics + inter-relay health probe (T5-S2/S5)
WZP-P2-T5-S2: Per-session Prometheus metrics
- 5 new per-session gauges/counters: buffer_depth, loss_pct, rtt_ms,
  underruns, overruns — all labeled by session_id
- update_session_quality() reads QualityReport from packet headers
- update_session_buffer() tracks jitter buffer state per session
- remove_session_metrics() cleans up labels on disconnect
- Delta-aware counter increments avoid double-counting
- 2 tests: session_quality_update, session_metrics_cleanup

WZP-P2-T5-S5: Inter-relay health probe
- New probe.rs: ProbeConfig, ProbeMetrics, SlidingWindow, ProbeRunner
- --probe <addr> flag (repeatable) spawns background probe per target
- Sends Ping/s over QUIC, receives Pong, computes RTT/loss/jitter
- SlidingWindow(60): tracks last 60 pings, loss = missed pongs,
  jitter = std deviation of RTT
- Prometheus gauges: wzp_probe_rtt_ms, loss_pct, jitter_ms, up
  with target label
- Probe connections use SNI "_probe" — relay responds with Pong loop,
  skipping auth/handshake
- Auto-reconnect with 5s backoff on disconnect
- 6 tests: metrics_register, rtt/loss/jitter calculation,
  window eviction, empty edge cases
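The window math (loss from missed pongs, jitter as RTT standard deviation) can be sketched with std only; names are illustrative, not the probe.rs types:

```rust
/// Sliding window of the last N probe results; None means a missed pong.
struct WindowSketch {
    rtts: std::collections::VecDeque<Option<f64>>,
    cap: usize, // e.g. 60 for SlidingWindow(60)
}

impl WindowSketch {
    fn new(cap: usize) -> Self {
        Self { rtts: std::collections::VecDeque::new(), cap }
    }

    fn push(&mut self, rtt_ms: Option<f64>) {
        if self.rtts.len() == self.cap { self.rtts.pop_front(); } // evict oldest
        self.rtts.push_back(rtt_ms);
    }

    /// Loss percentage = missed pongs over window size.
    fn loss_pct(&self) -> f64 {
        if self.rtts.is_empty() { return 0.0; }
        let missed = self.rtts.iter().filter(|r| r.is_none()).count();
        100.0 * missed as f64 / self.rtts.len() as f64
    }

    /// Jitter = population standard deviation of the successful RTTs.
    fn jitter_ms(&self) -> f64 {
        let ok: Vec<f64> = self.rtts.iter().flatten().copied().collect();
        if ok.len() < 2 { return 0.0; }
        let mean = ok.iter().sum::<f64>() / ok.len() as f64;
        let var = ok.iter().map(|r| (r - mean).powi(2)).sum::<f64>() / ok.len() as f64;
        var.sqrt()
    }
}
```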

231 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 13:09:52 +04:00
Siavash Sameni
39f6908478 feat: Prometheus metrics on relay + web bridge, client JSONL export (T5-S1/S3/S4)
WZP-P2-T5-S1: Relay Prometheus /metrics
- RelayMetrics: active_sessions, active_rooms, packets/bytes_forwarded,
  auth_attempts (ok/fail), handshake_duration histogram
- --metrics-port flag spawns HTTP server
- Wired into auth, handshake, session, and packet forwarding paths
- 2 tests

WZP-P2-T5-S3: Web bridge Prometheus /metrics
- WebMetrics: active_connections, frames_bridged (up/down),
  auth_failures, handshake_latency histogram
- Added /metrics route to existing axum app
- Wired into WS connect/disconnect, auth, handshake, send/recv loops
- 2 tests

WZP-P2-T5-S4: Client --metrics-file JSONL
- ClientMetricsSnapshot with all telemetry fields
- MetricsWriter: writes one JSON line per second to file
- snapshot_from_stats() converts JitterStats to snapshot
- --metrics-file <path> flag
- 3 tests
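The JSONL contract is just one self-contained JSON object per line. A hand-formatted sketch using only std (the real MetricsWriter presumably serializes a full snapshot struct; these field names are illustrative):

```rust
/// Format one metrics line; the writer appends one such line per second.
fn metrics_line(ts_secs: u64, buffer_depth: usize, underruns: u64, loss_pct: f64) -> String {
    format!(
        "{{\"ts\":{},\"buffer_depth\":{},\"underruns\":{},\"loss_pct\":{:.2}}}",
        ts_secs, buffer_depth, underruns, loss_pct
    )
}
```

Keeping each record on its own line is what makes the file tail-able and trivially ingestible by jq or a log shipper.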

223 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 12:44:57 +04:00
Siavash Sameni
3f813cd510 docs: telemetry & observability design — Prometheus, probes, Grafana
WZP-P2-T5 task breakdown with 7 subtasks:
- S1/S3: Prometheus /metrics on relay and web bridge
- S2: Per-session jitter/loss/RTT metrics
- S4: Client --metrics-file JSONL export
- S5/S6: Inter-relay health probes + mesh mode
- S7: Pre-built Grafana dashboard

Key design: multiplexed test lines between relays (~50 bytes/s)
provide continuous RTT/loss/jitter without meaningful BW cost.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 10:29:17 +04:00
Siavash Sameni
59a00d371b feat: jitter buffer instrumentation — drift test, telemetry, parameter sweep
WZP-P2-T1-S1: Automated drift measurement
- New drift_test.rs: DriftTestConfig, DriftResult, run_drift_test()
- CLI --drift-test <secs>: sends tone, measures actual vs expected duration
- Interpretation tiers: EXCELLENT (<50ms) / GOOD / FAIR / POOR
- 2 unit tests: drift math verification, config defaults
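The drift math itself is small: compare expected and measured playback duration, then bucket the absolute difference. Only the <50 ms EXCELLENT boundary is stated above; the GOOD/FAIR cutoffs below are assumptions for illustration:

```rust
/// Classify clock drift between expected and actual playback duration.
fn drift_tier(expected_ms: f64, actual_ms: f64) -> &'static str {
    let drift = (actual_ms - expected_ms).abs();
    if drift < 50.0 { "EXCELLENT" }        // stated in the commit
    else if drift < 200.0 { "GOOD" }       // assumed boundary
    else if drift < 500.0 { "FAIR" }       // assumed boundary
    else { "POOR" }
}
```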

WZP-P2-T1-S2: Jitter buffer telemetry
- JitterStats gains: total_decoded, underruns, overruns, max_depth_seen
- JitterBuffer: record_underrun(), record_decode(), reset_stats()
- CallDecoder: stats() getter, reset_stats()
- JitterTelemetry: periodic tracing::info! logger with configurable interval
- 4 unit tests: ingestion tracking, underrun tracking, reset, interval

WZP-P2-T1-S3: Parameter sweep
- New sweep.rs: SweepConfig, SweepResult, run_local_sweep()
- Tests 20 jitter buffer configs (5 target × 4 max depths) locally
- CLI --sweep: runs sweep, prints ASCII comparison table
- No network needed — pure encoder→decoder pipeline test
- 3 unit tests: config defaults, local sweep runs, table formatting

216 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 10:26:40 +04:00
Siavash Sameni
524d1145bb feat: complete WZP Phase 2 (T2/T3/T4) — adaptive quality, AudioWorklet, sessions
WZP-P2-T2: Adaptive quality switching
- QualityAdapter with sliding window of QualityReports
- Hysteresis: 3 consecutive reports before switching profiles
- Thresholds: loss>15%/rtt>200ms→CATASTROPHIC, loss>5%/rtt>100ms→DEGRADED
- CallConfig::from_profile() constructor
- 5 unit tests: good/degraded/catastrophic conditions, hysteresis, recovery

WZP-P2-T3: AudioWorklet migration (web bridge)
- audio-processor.js: WZPCaptureProcessor + WZPPlaybackProcessor
- Capture: buffers 128-sample AudioWorklet blocks → 960-sample frames
- Playback: ring buffer, Int16→Float32 conversion in worklet
- ScriptProcessorNode fallback if AudioWorklet unavailable
- Existing UI preserved (connect, room, PTT)

WZP-P2-T4: Concurrent session management (relay)
- SessionManager tracks active sessions with HashMap
- Enforces max_sessions limit from RelayConfig
- create_session/remove_session lifecycle
- Wired into relay main: session created after auth+handshake,
  cleaned up after run_participant returns
- 7 unit tests: create/remove, max enforced, room tracking, info, expiry

207 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 10:20:51 +04:00
Siavash Sameni
bf56d84ef0 test: 17 new tests for S-4/5/6/7/9 integration tasks
S-4 Room hashing + ACL (8 tests in featherchat_compat.rs):
- hash_room_name: deterministic, 32 hex chars, different inputs differ
- hash_room_name_matches_fc_convention: manual SHA-256 verification
- room_acl: open mode, enforced mode, allow-listed, deny-unlisted

S-5 Handshake integration (4 tests in handshake_integration.rs):
- handshake_succeeds: real QUIC, encrypt/decrypt cross-verified
- handshake_verifies_identity: different seeds, session still works
- auth_then_handshake: AuthToken + CallOffer/Answer in sequence
- handshake_rejects_bad_signature: tampered sig → error

S-6/7/9 Web+Proto+TLS (5 tests in featherchat_compat.rs):
- auth_response_with_eth_address: FC's extra field handled
- wzp_proto_has_auth_token_variant: serialize/deserialize roundtrip
- all_fc_call_signal_types_representable: all 7 types verified
- hash_room_name_used_as_sni_is_valid: unicode/special chars → valid hex
- wzp_proto_cargo_toml_is_standalone: no workspace inheritance

196 total tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 10:09:34 +04:00
Siavash Sameni
59069bfba2 feat: complete all WZP-S integration tasks (S-4/5/6/7/9)
WZP-S-4: Room access control
- hash_room_name() in wzp-crypto: SHA-256("featherchat-group:"+name)[:16]
- CLI --room flag hashes before SNI, web bridge does the same
- RoomManager gains ACL: with_acl(), allow(), is_authorized()
- join() returns Result, rejects unauthorized fingerprints

WZP-S-5: Crypto handshake wired into all live paths
- CLI: perform_handshake() after connect, before any mode
- Relay: accept_handshake() after auth, before room join
- Web bridge: perform_handshake() after auth, before audio
- Relay generates ephemeral identity at startup

WZP-S-6: Web bridge featherChat auth
- --auth-url flag: browsers send {"type":"auth","token":"..."} as first WS msg
- Validates against featherChat, passes token to relay
- --cert/--key flags for production TLS (replaces self-signed)

WZP-S-7: wzp-proto standalone
- Cargo.toml uses explicit versions (no workspace inheritance)
- FC can use as git dependency

WZP-S-9: All 6 hardcoded assumptions resolved
- Auth, hashed rooms, mandatory handshake, real TLS certs,
  profile negotiation, token validation

CLI also gains --room and --token flags.
179 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 09:59:05 +04:00
Siavash Sameni
26dc848081 test: 15 cross-project integration tests — WZP ↔ featherChat verified
Identity (6 tests):
- Same seed → same Ed25519/X25519 keys, same fingerprint, same display
- Random seed, raw HKDF output verified

BIP39 Mnemonic (3 tests):
- Roundtrip both directions, identical strings

CallSignal Interop (4 tests):
- Offer/Answer/Hangup roundtrip through FC bincode serialization
- Signal type mapping verified

Auth Contract (2 tests):
- Request/response shapes match between WZP and FC

Uses warzone-protocol v0.0.21 as real dependency.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 09:39:04 +04:00
Siavash Sameni
ad16ddb903 feat: WZP-S-2 relay auth + WZP-S-3 featherChat signaling bridge
WZP-S-2: Relay token authentication
- New --auth-url flag: relay calls POST {url} with bearer token
- Clients must send SignalMessage::AuthToken as first signal
- Relay validates against featherChat's /v1/auth/validate endpoint
- Rejects unauthenticated clients before they join rooms
- New auth.rs module with validate_token() + tests

WZP-S-3: featherChat signaling bridge
- New featherchat.rs module for CallSignal interop
- WzpCallPayload: wraps SignalMessage + relay_addr + room name
- encode_call_payload/decode_call_payload for JSON serialization
- CallSignalType enum mirrors featherChat's variants
- signal_to_call_type maps WZP signals to FC types

Protocol: Added SignalMessage::AuthToken { token } variant

129 tests passing across all crates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 09:23:46 +04:00
Siavash Sameni
d870c9e08a docs: mark WZP-FC-1 and WZP-FC-4 as DONE (featherChat v0.0.21)
featherChat commit 064a730 implements:
- CallSignal WireMessage variant with Offer/Answer/ICE/Hangup/Reject/Ringing/Busy
- POST /v1/auth/validate endpoint returning fingerprint + alias

WZP can now:
- Send SignalMessage as JSON in CallSignal.payload through FC's E2E channel
- Verify FC bearer tokens on the relay via the validate endpoint

Next: WZP-S-2 (relay auth) and WZP-S-3 (signaling bridge in client)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 09:16:52 +04:00
Siavash Sameni
616505e8a9 docs: shared crate strategy for WZP ↔ featherChat interop
Defines 5 tasks (FC-CRATE-1/2/3, WZP-CRATE-1/2) to make both
projects' crates importable by each other:

featherChat side:
- FC-CRATE-1: Make warzone-protocol standalone (replace workspace deps)
- FC-CRATE-2: Add CallSignal variant using wzp-proto types
- FC-CRATE-3: Extract warzone-identity micro-crate (optional)

WZP side (after FC-CRATE-1):
- WZP-CRATE-1: Replace identity mirror with real warzone-protocol dep
- WZP-CRATE-2: Verify wzp-proto works as git dep from featherChat

Priority: FC-CRATE-1 first (30 min, unblocks everything).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 09:14:25 +04:00
Siavash Sameni
12cdfe6c8a feat: featherChat-compatible identity — seed, mnemonic, fingerprint
New identity module (wzp-crypto/src/identity.rs) mirrors featherChat's
warzone-protocol identity.rs exactly:
- Seed: 32 bytes, from hex or BIP39 mnemonic (24 words)
- HKDF derivation: same salt (None), same info strings
- Fingerprint: SHA-256(Ed25519 pub)[:16], same xxxx:xxxx format
- Cross-verified: test proves identity module matches KeyExchange trait
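The display step after hashing is pure formatting: take the first 8 digest bytes (16 hex chars) and group them with colons. A std-only sketch of that step, assuming the SHA-256 digest is computed elsewhere (the function name is illustrative):

```rust
/// Format the first 8 digest bytes as 16 lowercase hex chars in
/// colon-separated groups of four (the xxxx:xxxx:... display form).
fn display_fingerprint(digest: &[u8]) -> String {
    let hex: String = digest.iter().take(8).map(|b| format!("{:02x}", b)).collect();
    hex.as_bytes()
        .chunks(4)
        .map(|c| std::str::from_utf8(c).unwrap()) // safe: hex is ASCII
        .collect::<Vec<_>>()
        .join(":")
}
```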

CLI flags:
- --seed <64 hex chars>: use a specific identity
- --mnemonic <24 words>: use BIP39 mnemonic from featherChat
- Without either: generates ephemeral identity

Also adds featherChat as git submodule at deps/featherchat for reference.

32 crypto tests passing (27 original + 5 identity tests).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 09:09:38 +04:00
Siavash Sameni
97402f6e60 docs: integration task tracker from featherChat commit 65f6390
Maps all WZP-S-* (our side) and WZP-FC-* (featherChat side) tasks
with status tracking and priority order.

Key findings:
- WZP-S-1 (HKDF alignment): DONE — both use None salt, info strings match
- WZP-S-9: 6 hardcoded assumptions documented for fixing
- Priority: identity test → CLI seed → CallSignal variant → auth → handshake

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 08:56:26 +04:00
50 changed files with 10063 additions and 280 deletions

.gitmodules (vendored, new file, +3)

@@ -0,0 +1,3 @@
+[submodule "deps/featherchat"]
+	path = deps/featherchat
+	url = ssh://git@git.manko.yoga:222/manawenuz/featherChat.git

Cargo.lock (generated, +1558): diff suppressed because it is too large


@@ -51,3 +51,4 @@ wzp-codec = { path = "crates/wzp-codec" }
 wzp-fec = { path = "crates/wzp-fec" }
 wzp-crypto = { path = "crates/wzp-crypto" }
 wzp-transport = { path = "crates/wzp-transport" }
+wzp-client = { path = "crates/wzp-client" }


@@ -18,6 +18,10 @@ tracing-subscriber = { workspace = true }
 async-trait = { workspace = true }
 bytes = { workspace = true }
 anyhow = "1"
+serde = { workspace = true }
+serde_json = "1"
+chrono = "0.4"
+rustls = { version = "0.23", default-features = false, features = ["ring", "std"] }
 cpal = { version = "0.15", optional = true }
 
 [features]


@@ -2,17 +2,21 @@
//! //!
//! Pipeline: mic → encode → FEC → encrypt → send / recv → decrypt → FEC → decode → speaker //! Pipeline: mic → encode → FEC → encrypt → send / recv → decrypt → FEC → decode → speaker
use bytes::Bytes; use std::time::{Duration, Instant};
use tracing::{debug, warn};
use bytes::Bytes;
use tracing::{debug, info, warn};
use wzp_codec::{ComfortNoise, NoiseSupressor, SilenceDetector};
use wzp_fec::{RaptorQFecDecoder, RaptorQFecEncoder}; use wzp_fec::{RaptorQFecDecoder, RaptorQFecEncoder};
use wzp_proto::jitter::{JitterBuffer, PlayoutResult}; use wzp_proto::jitter::{JitterBuffer, PlayoutResult};
use wzp_proto::packet::{MediaHeader, MediaPacket}; use wzp_proto::packet::{MediaHeader, MediaPacket, MiniFrameContext};
use wzp_proto::quality::AdaptiveQualityController; use wzp_proto::quality::AdaptiveQualityController;
use wzp_proto::traits::{ use wzp_proto::traits::{
AudioDecoder, AudioEncoder, FecDecoder, FecEncoder, AudioDecoder, AudioEncoder, FecDecoder, FecEncoder,
}; };
use wzp_proto::QualityProfile; use wzp_proto::packet::QualityReport;
use wzp_proto::{CodecId, QualityProfile};
/// Configuration for a call session. /// Configuration for a call session.
pub struct CallConfig { pub struct CallConfig {
@@ -24,6 +28,25 @@ pub struct CallConfig {
pub jitter_max: usize, pub jitter_max: usize,
/// Jitter buffer min depth before playout. /// Jitter buffer min depth before playout.
pub jitter_min: usize, pub jitter_min: usize,
/// Enable silence suppression (default: true).
pub suppression_enabled: bool,
/// RMS threshold for silence detection (default: 100.0 for i16 PCM).
pub silence_threshold_rms: f64,
/// Hangover frames before suppression begins (default: 5 = 100ms at 20ms frames).
pub silence_hangover_frames: u32,
/// Comfort noise amplitude (default: 50).
pub comfort_noise_level: i16,
/// Enable ML-based noise suppression via RNNoise (default: true).
pub noise_suppression: bool,
/// Enable mini-frame header compression (default: true).
/// When enabled, only every 50th frame carries a full 12-byte MediaHeader;
/// intermediate frames use a compact 4-byte MiniHeader.
pub mini_frames_enabled: bool,
/// Enable adaptive jitter buffer (default: true).
///
/// When true, the jitter buffer target depth is automatically adjusted
/// based on observed inter-arrival jitter (NetEq-inspired algorithm).
pub adaptive_jitter: bool,
} }
impl Default for CallConfig { impl Default for CallConfig {
@@ -33,6 +56,137 @@ impl Default for CallConfig {
jitter_target: 10, jitter_target: 10,
jitter_max: 250, jitter_max: 250,
jitter_min: 3, // 60ms — low latency start, still smooths jitter jitter_min: 3, // 60ms — low latency start, still smooths jitter
suppression_enabled: true,
silence_threshold_rms: 100.0,
silence_hangover_frames: 5,
comfort_noise_level: 50,
noise_suppression: true,
mini_frames_enabled: true,
adaptive_jitter: true,
}
}
}
impl CallConfig {
/// Build a `CallConfig` tuned for the given quality profile.
pub fn from_profile(profile: QualityProfile) -> Self {
let (jitter_target, jitter_max, jitter_min) = if profile == QualityProfile::CATASTROPHIC {
// Catastrophic: larger jitter buffer to absorb spikes
(20, 500, 8)
} else if profile == QualityProfile::DEGRADED {
// Degraded: moderately deeper buffer
(15, 350, 5)
} else {
// Good: low-latency defaults
(10, 250, 3)
};
Self {
profile,
jitter_target,
jitter_max,
jitter_min,
..Default::default()
}
}
}
/// Sliding-window quality adapter that reacts to relay `QualityReport`s.
///
/// Thresholds (per-report):
/// - loss > 15% OR rtt > 200ms => CATASTROPHIC
/// - loss > 5% OR rtt > 100ms => DEGRADED
/// - otherwise => GOOD
///
/// Hysteresis: a profile switch is only recommended after the new profile
/// has been the recommendation for 3 or more consecutive reports.
pub struct QualityAdapter {
/// Sliding window of the last N reports.
window: std::collections::VecDeque<QualityReport>,
/// Maximum window size.
max_window: usize,
/// Number of consecutive reports recommending the same (non-current) profile.
consecutive_same: u32,
/// The profile that the last `consecutive_same` reports recommended.
pending_profile: Option<QualityProfile>,
}
/// Number of consecutive reports required before accepting a switch.
const HYSTERESIS_COUNT: u32 = 3;
/// Default sliding window capacity.
const ADAPTER_WINDOW: usize = 10;
impl QualityAdapter {
pub fn new() -> Self {
Self {
window: std::collections::VecDeque::with_capacity(ADAPTER_WINDOW),
max_window: ADAPTER_WINDOW,
consecutive_same: 0,
pending_profile: None,
}
}
/// Record a new quality report from the relay.
pub fn ingest(&mut self, report: &QualityReport) {
if self.window.len() >= self.max_window {
self.window.pop_front();
}
self.window.push_back(*report);
}
/// Classify a single report into a recommended profile.
fn classify(report: &QualityReport) -> QualityProfile {
let loss = report.loss_percent();
let rtt = report.rtt_ms();
if loss > 15.0 || rtt > 200 {
QualityProfile::CATASTROPHIC
} else if loss > 5.0 || rtt > 100 {
QualityProfile::DEGRADED
} else {
QualityProfile::GOOD
}
}
/// Return the best profile based on the most recent report in the window.
pub fn recommended_profile(&self) -> QualityProfile {
match self.window.back() {
Some(report) => Self::classify(report),
None => QualityProfile::GOOD,
}
}
/// Determine if a profile switch should happen, applying hysteresis.
///
/// Returns `Some(new_profile)` only when the recommendation has differed
/// from `current` for at least `HYSTERESIS_COUNT` consecutive reports.
pub fn should_switch(&mut self, current: &QualityProfile) -> Option<QualityProfile> {
let recommended = self.recommended_profile();
if recommended == *current {
// Conditions match current profile — reset pending state.
self.consecutive_same = 0;
self.pending_profile = None;
return None;
}
// Recommended differs from current.
match self.pending_profile {
Some(pending) if pending == recommended => {
self.consecutive_same += 1;
}
_ => {
// New or changed recommendation — restart counter.
self.pending_profile = Some(recommended);
self.consecutive_same = 1;
}
}
if self.consecutive_same >= HYSTERESIS_COUNT {
self.consecutive_same = 0;
self.pending_profile = None;
Some(recommended)
} else {
None
} }
} }
} }
@@ -53,6 +207,24 @@ pub struct CallEncoder {
frame_in_block: u8, frame_in_block: u8,
/// Timestamp counter (ms). /// Timestamp counter (ms).
timestamp_ms: u32, timestamp_ms: u32,
/// Silence detector for suppression.
silence_detector: SilenceDetector,
/// Whether silence suppression is enabled.
suppression_enabled: bool,
/// Total frames suppressed (telemetry).
frames_suppressed: u64,
/// Frames since last CN packet was sent.
cn_counter: u32,
/// Comfort noise amplitude level (stored for CN packet payload).
cn_level: i16,
/// ML-based noise suppressor (RNNoise).
denoiser: NoiseSupressor,
/// Mini-frame compression context (tracks last full header).
mini_context: MiniFrameContext,
/// Whether mini-frame header compression is enabled.
mini_frames_enabled: bool,
/// Frames encoded since the last full header was emitted.
frames_since_full: u32,
} }
impl CallEncoder { impl CallEncoder {
@@ -65,6 +237,35 @@ impl CallEncoder {
block_id: 0, block_id: 0,
frame_in_block: 0, frame_in_block: 0,
timestamp_ms: 0, timestamp_ms: 0,
silence_detector: SilenceDetector::new(
config.silence_threshold_rms,
config.silence_hangover_frames,
),
suppression_enabled: config.suppression_enabled,
frames_suppressed: 0,
cn_counter: 0,
cn_level: config.comfort_noise_level,
denoiser: {
let mut d = NoiseSupressor::new();
d.set_enabled(config.noise_suppression);
d
},
mini_context: MiniFrameContext::default(),
mini_frames_enabled: config.mini_frames_enabled,
frames_since_full: 0,
}
}
/// Serialize a `MediaPacket` for transmission, applying mini-frame
/// compression when enabled.
///
/// Returns compact wire bytes: either `[FRAME_TYPE_FULL][MediaHeader][payload]`
/// or `[FRAME_TYPE_MINI][MiniHeader][payload]`.
pub fn serialize_compact(&mut self, packet: &MediaPacket) -> Bytes {
if self.mini_frames_enabled {
packet.encode_compact(&mut self.mini_context, &mut self.frames_since_full)
} else {
packet.to_bytes()
} }
} }
@@ -73,6 +274,55 @@ impl CallEncoder {
/// Input: 48kHz mono PCM, frame size depends on profile (960 for 20ms, 1920 for 40ms). /// Input: 48kHz mono PCM, frame size depends on profile (960 for 20ms, 1920 for 40ms).
/// Output: one or more MediaPackets to send. /// Output: one or more MediaPackets to send.
pub fn encode_frame(&mut self, pcm: &[i16]) -> Result<Vec<MediaPacket>, anyhow::Error> { pub fn encode_frame(&mut self, pcm: &[i16]) -> Result<Vec<MediaPacket>, anyhow::Error> {
// Noise suppression: denoise the PCM before silence detection and encoding.
let pcm = if self.denoiser.is_enabled() {
let mut buf = pcm.to_vec();
self.denoiser.process(&mut buf);
buf
} else {
pcm.to_vec()
};
let pcm = &pcm[..];
// Silence suppression: skip encoding silent frames, periodically send CN.
if self.suppression_enabled && self.silence_detector.is_silent(pcm) {
self.frames_suppressed += 1;
self.cn_counter += 1;
// Advance timestamp even for suppressed frames.
self.timestamp_ms = self
.timestamp_ms
.wrapping_add(self.profile.frame_duration_ms as u32);
// Every 10 frames (~200ms), send a comfort noise packet.
if self.cn_counter % 10 == 0 {
let cn_pkt = MediaPacket {
header: MediaHeader {
version: 0,
is_repair: false,
codec_id: CodecId::ComfortNoise,
has_quality_report: false,
fec_ratio_encoded: 0,
seq: self.seq,
timestamp: self.timestamp_ms,
fec_block: self.block_id,
fec_symbol: 0,
reserved: 0,
csrc_count: 0,
},
payload: Bytes::from(vec![self.cn_level as u8]),
quality_report: None,
};
self.seq = self.seq.wrapping_add(1);
return Ok(vec![cn_pkt]);
}
return Ok(vec![]);
}
// Not silent — reset CN counter and proceed with normal encoding.
self.cn_counter = 0;
// Encode audio // Encode audio
let mut encoded = vec![0u8; self.audio_enc.max_frame_bytes()]; let mut encoded = vec![0u8; self.audio_enc.max_frame_bytes()];
let enc_len = self.audio_enc.encode(pcm, &mut encoded)?; let enc_len = self.audio_enc.encode(pcm, &mut encoded)?;
@@ -164,19 +414,42 @@ pub struct CallDecoder {
pub quality: AdaptiveQualityController, pub quality: AdaptiveQualityController,
/// Current profile. /// Current profile.
profile: QualityProfile, profile: QualityProfile,
/// Comfort noise generator for filling silent gaps.
comfort_noise: ComfortNoise,
/// Whether the last decoded frame was comfort noise.
last_was_cn: bool,
/// Mini-frame decompression context (tracks last full header baseline).
mini_context: MiniFrameContext,
} }
impl CallDecoder { impl CallDecoder {
pub fn new(config: &CallConfig) -> Self { pub fn new(config: &CallConfig) -> Self {
let jitter = if config.adaptive_jitter {
JitterBuffer::new_adaptive(config.jitter_min, config.jitter_max)
} else {
JitterBuffer::new(config.jitter_target, config.jitter_max, config.jitter_min)
};
Self {
audio_dec: wzp_codec::create_decoder(config.profile),
fec_dec: wzp_fec::create_decoder(&config.profile),
jitter,
quality: AdaptiveQualityController::new(),
profile: config.profile,
comfort_noise: ComfortNoise::new(50),
last_was_cn: false,
mini_context: MiniFrameContext::default(),
}
}
/// Deserialize a compact wire-format buffer into a `MediaPacket`,
/// auto-detecting full vs mini headers.
///
/// Returns `None` on malformed data or if a mini-frame arrives before
/// any full header baseline has been established.
pub fn deserialize_compact(&mut self, buf: &[u8]) -> Option<MediaPacket> {
MediaPacket::decode_compact(buf, &mut self.mini_context)
}
/// Feed a received media packet into the decode pipeline.
pub fn ingest(&mut self, packet: MediaPacket) {
// Feed to FEC decoder
@@ -199,25 +472,46 @@ impl CallDecoder {
pub fn decode_next(&mut self, pcm: &mut [i16]) -> Option<usize> {
match self.jitter.pop() {
PlayoutResult::Packet(pkt) => {
// Comfort noise packet: generate CN instead of decoding audio.
if pkt.header.codec_id == CodecId::ComfortNoise {
self.comfort_noise.generate(pcm);
self.last_was_cn = true;
self.jitter.record_decode();
return Some(pcm.len());
}
self.last_was_cn = false;
let result = match self.audio_dec.decode(&pkt.payload, pcm) {
Ok(n) => Some(n),
Err(e) => {
warn!("decode error: {e}, using PLC");
self.audio_dec.decode_lost(pcm).ok()
}
};
if result.is_some() {
self.jitter.record_decode();
}
result
}
PlayoutResult::Missing { seq } => {
// Only generate PLC if there are still packets buffered ahead.
// Otherwise we've drained everything — return None to stop.
if self.jitter.depth() > 0 {
debug!(seq, "packet loss, generating PLC");
let result = self.audio_dec.decode_lost(pcm).ok();
if result.is_some() {
self.jitter.record_decode();
}
result
} else {
self.jitter.record_underrun();
None
}
}
PlayoutResult::NotReady => {
self.jitter.record_underrun();
None
}
}
}
@@ -227,8 +521,54 @@
}
/// Get jitter buffer statistics.
pub fn stats(&self) -> &wzp_proto::jitter::JitterStats {
self.jitter.stats()
}
/// Reset jitter buffer statistics counters.
pub fn reset_stats(&mut self) {
self.jitter.reset_stats();
}
}
/// Periodic telemetry logger for jitter buffer statistics.
///
/// Call `maybe_log` on each decode tick; it will emit a `tracing::info!` event
/// no more frequently than the configured interval.
pub struct JitterTelemetry {
interval: Duration,
last_report: Instant,
}
impl JitterTelemetry {
/// Create a new telemetry logger that reports at most once per `interval_secs`.
pub fn new(interval_secs: u64) -> Self {
Self {
interval: Duration::from_secs(interval_secs),
last_report: Instant::now(),
}
}
/// Log jitter statistics if the interval has elapsed. Returns `true` when a
/// log line was emitted.
pub fn maybe_log(&mut self, stats: &wzp_proto::jitter::JitterStats) -> bool {
let now = Instant::now();
if now.duration_since(self.last_report) >= self.interval {
info!(
buffer_depth = stats.current_depth,
underruns = stats.underruns,
overruns = stats.overruns,
late_packets = stats.packets_late,
total_received = stats.packets_received,
total_decoded = stats.total_decoded,
max_depth_seen = stats.max_depth_seen,
"jitter buffer telemetry"
);
self.last_report = now;
true
} else {
false
}
}
}
@@ -301,4 +641,279 @@ mod tests {
let mut pcm = vec![0i16; 960];
assert!(dec.decode_next(&mut pcm).is_none());
}
// ---- QualityAdapter tests ----
/// Helper: build a QualityReport from human-readable loss% and RTT ms.
fn make_report(loss_pct_f: f32, rtt_ms: u16) -> QualityReport {
QualityReport {
loss_pct: (loss_pct_f / 100.0 * 255.0) as u8,
rtt_4ms: (rtt_ms / 4) as u8,
jitter_ms: 10,
bitrate_cap_kbps: 200,
}
}
#[test]
fn good_conditions_stays_good() {
let mut adapter = QualityAdapter::new();
let good = make_report(1.0, 40);
for _ in 0..10 {
adapter.ingest(&good);
}
assert_eq!(adapter.recommended_profile(), QualityProfile::GOOD);
let current = QualityProfile::GOOD;
for _ in 0..10 {
adapter.ingest(&good);
assert!(adapter.should_switch(&current).is_none());
}
}
#[test]
fn high_loss_degrades() {
let mut adapter = QualityAdapter::new();
// 8% loss, low RTT => DEGRADED
let degraded = make_report(8.0, 40);
let mut current = QualityProfile::GOOD;
// Feed 3 consecutive degraded reports to pass hysteresis
for _ in 0..3 {
adapter.ingest(&degraded);
if let Some(new) = adapter.should_switch(&current) {
current = new;
}
}
assert_eq!(current, QualityProfile::DEGRADED);
}
#[test]
fn catastrophic_conditions() {
let mut adapter = QualityAdapter::new();
// 20% loss => CATASTROPHIC
let terrible = make_report(20.0, 50);
let mut current = QualityProfile::GOOD;
for _ in 0..3 {
adapter.ingest(&terrible);
if let Some(new) = adapter.should_switch(&current) {
current = new;
}
}
assert_eq!(current, QualityProfile::CATASTROPHIC);
// Also test via high RTT alone (250ms > 200ms threshold)
let mut adapter2 = QualityAdapter::new();
let high_rtt = make_report(1.0, 252); // rtt_4ms rounds to 63 => 252ms
let mut current2 = QualityProfile::GOOD;
for _ in 0..3 {
adapter2.ingest(&high_rtt);
if let Some(new) = adapter2.should_switch(&current2) {
current2 = new;
}
}
assert_eq!(current2, QualityProfile::CATASTROPHIC);
}
#[test]
fn hysteresis_prevents_flapping() {
let mut adapter = QualityAdapter::new();
let good = make_report(1.0, 40);
let bad = make_report(8.0, 40); // DEGRADED
let current = QualityProfile::GOOD;
// Alternate good/bad — should never trigger a switch because
// we never get 3 consecutive same-recommendation reports.
for _ in 0..20 {
adapter.ingest(&bad);
assert!(adapter.should_switch(&current).is_none());
adapter.ingest(&good);
assert!(adapter.should_switch(&current).is_none());
}
assert_eq!(current, QualityProfile::GOOD);
}
#[test]
fn recovery_to_good() {
let mut adapter = QualityAdapter::new();
let bad = make_report(20.0, 50);
let good = make_report(1.0, 40);
// Drive to CATASTROPHIC first
let mut current = QualityProfile::GOOD;
for _ in 0..3 {
adapter.ingest(&bad);
if let Some(new) = adapter.should_switch(&current) {
current = new;
}
}
assert_eq!(current, QualityProfile::CATASTROPHIC);
// Now feed good reports — should recover to GOOD after 3 consecutive
for _ in 0..3 {
adapter.ingest(&good);
if let Some(new) = adapter.should_switch(&current) {
current = new;
}
}
assert_eq!(current, QualityProfile::GOOD);
}
#[test]
fn call_config_from_profile() {
let good = CallConfig::from_profile(QualityProfile::GOOD);
assert_eq!(good.profile, QualityProfile::GOOD);
assert_eq!(good.jitter_min, 3);
let degraded = CallConfig::from_profile(QualityProfile::DEGRADED);
assert_eq!(degraded.profile, QualityProfile::DEGRADED);
assert!(degraded.jitter_target > good.jitter_target);
let catastrophic = CallConfig::from_profile(QualityProfile::CATASTROPHIC);
assert_eq!(catastrophic.profile, QualityProfile::CATASTROPHIC);
assert!(catastrophic.jitter_max > degraded.jitter_max);
}
// ---- JitterStats telemetry tests ----
fn make_test_packet(seq: u16) -> MediaPacket {
MediaPacket {
header: MediaHeader {
version: 0,
is_repair: false,
codec_id: CodecId::Opus24k,
has_quality_report: false,
fec_ratio_encoded: 0,
seq,
timestamp: seq as u32 * 20,
fec_block: 0,
fec_symbol: seq as u8,
reserved: 0,
csrc_count: 0,
},
payload: Bytes::from(vec![0u8; 60]),
quality_report: None,
}
}
#[test]
fn stats_track_ingestion() {
let config = CallConfig::default();
let mut dec = CallDecoder::new(&config);
for i in 0..5u16 {
dec.ingest(make_test_packet(i));
}
let stats = dec.stats();
assert_eq!(stats.packets_received, 5);
assert_eq!(stats.current_depth, 5);
assert_eq!(stats.max_depth_seen, 5);
}
#[test]
fn stats_track_underruns() {
let config = CallConfig::default();
let mut dec = CallDecoder::new(&config);
// Empty buffer — decode_next should record underruns
let mut pcm = vec![0i16; 960];
dec.decode_next(&mut pcm);
dec.decode_next(&mut pcm);
dec.decode_next(&mut pcm);
assert_eq!(dec.stats().underruns, 3);
}
#[test]
fn stats_reset() {
let config = CallConfig::default();
let mut dec = CallDecoder::new(&config);
// Generate some stats: ingest packets and trigger underruns on empty buffer
for i in 0..3u16 {
dec.ingest(make_test_packet(i));
}
// Also call decode on empty decoder to get underruns
let config2 = CallConfig::default();
let mut dec2 = CallDecoder::new(&config2);
let mut pcm = vec![0i16; 960];
dec2.decode_next(&mut pcm); // underrun — nothing in buffer
assert!(dec.stats().packets_received > 0);
assert!(dec2.stats().underruns > 0);
// Test reset on the decoder with ingested packets
dec.reset_stats();
let stats = dec.stats();
assert_eq!(stats.packets_received, 0);
assert_eq!(stats.underruns, 0);
assert_eq!(stats.overruns, 0);
assert_eq!(stats.total_decoded, 0);
assert_eq!(stats.packets_late, 0);
assert_eq!(stats.max_depth_seen, 0);
// Test reset on the decoder with underruns
dec2.reset_stats();
assert_eq!(dec2.stats().underruns, 0);
}
#[test]
fn telemetry_respects_interval() {
use wzp_proto::jitter::JitterStats;
let mut telemetry = JitterTelemetry::new(60); // 60-second interval
let stats = JitterStats::default();
// First call right after creation — should not log because no time has passed
// (the interval hasn't elapsed since construction)
let logged = telemetry.maybe_log(&stats);
assert!(!logged, "should not log before interval elapses");
}
#[test]
fn silence_suppression_skips_silent_frames() {
let config = CallConfig {
suppression_enabled: true,
silence_threshold_rms: 100.0,
silence_hangover_frames: 5,
comfort_noise_level: 50,
..Default::default()
};
let mut enc = CallEncoder::new(&config);
let silence = vec![0i16; 960];
let mut total_packets = 0;
let mut cn_packets = 0;
for _ in 0..20 {
let packets = enc.encode_frame(&silence).unwrap();
for p in &packets {
if p.header.codec_id == CodecId::ComfortNoise {
cn_packets += 1;
// CN payload should be a single byte with the noise level.
assert_eq!(p.payload.len(), 1);
}
}
total_packets += packets.len();
}
// First 5 frames are hangover (not suppressed) => 5 normal source packets
// (plus potential repair packets from FEC block completion).
// Remaining 15 frames are suppressed; CN every 10 frames => 1 CN packet
// (cn_counter hits 10 on the 10th suppressed frame).
assert!(
total_packets < 20,
"suppression should reduce packet count, got {total_packets}"
);
assert!(
cn_packets >= 1,
"should have at least one CN packet, got {cn_packets}"
);
assert!(
enc.frames_suppressed > 0,
"frames_suppressed should be > 0"
);
}
}


@@ -40,6 +40,38 @@ struct CliArgs {
send_file: Option<String>,
record_file: Option<String>,
echo_test_secs: Option<u32>,
drift_test_secs: Option<u32>,
sweep: bool,
seed_hex: Option<String>,
mnemonic: Option<String>,
room: Option<String>,
token: Option<String>,
_metrics_file: Option<String>,
}
impl CliArgs {
/// Resolve the identity seed from --seed, --mnemonic, or generate a new one.
pub fn resolve_seed(&self) -> wzp_crypto::Seed {
if let Some(ref hex_str) = self.seed_hex {
let seed = wzp_crypto::Seed::from_hex(hex_str).expect("invalid --seed hex");
let id = seed.derive_identity();
let fp = id.public_identity().fingerprint;
info!(fingerprint = %fp, "identity from --seed");
seed
} else if let Some(ref words) = self.mnemonic {
let seed = wzp_crypto::Seed::from_mnemonic(words).expect("invalid --mnemonic");
let id = seed.derive_identity();
let fp = id.public_identity().fingerprint;
info!(fingerprint = %fp, "identity from --mnemonic");
seed
} else {
let seed = wzp_crypto::Seed::generate();
let id = seed.derive_identity();
let fp = id.public_identity().fingerprint;
info!(fingerprint = %fp, "generated ephemeral identity");
seed
}
}
}
fn parse_args() -> CliArgs {
@@ -49,6 +81,13 @@ fn parse_args() -> CliArgs {
let mut send_file = None;
let mut record_file = None;
let mut echo_test_secs = None;
let mut drift_test_secs = None;
let mut sweep = false;
let mut seed_hex = None;
let mut mnemonic = None;
let mut room = None;
let mut token = None;
let mut metrics_file = None;
let mut relay_str = None;
let mut i = 1;
@@ -72,6 +111,37 @@ fn parse_args() -> CliArgs {
.to_string(),
);
}
"--seed" => {
i += 1;
seed_hex = Some(args.get(i).expect("--seed requires hex string").to_string());
}
"--mnemonic" => {
// Consume all remaining words until next flag or end
i += 1;
let mut words = Vec::new();
while i < args.len() && !args[i].starts_with('-') {
words.push(args[i].clone());
i += 1;
}
i -= 1; // back up since outer loop will increment
mnemonic = Some(words.join(" "));
}
"--room" => {
i += 1;
room = Some(args.get(i).expect("--room requires a name").to_string());
}
"--token" => {
i += 1;
token = Some(args.get(i).expect("--token requires a value").to_string());
}
"--metrics-file" => {
i += 1;
metrics_file = Some(
args.get(i)
.expect("--metrics-file requires a path")
.to_string(),
);
}
"--record" => { "--record" => {
i += 1; i += 1;
record_file = Some( record_file = Some(
@@ -89,6 +159,16 @@ fn parse_args() -> CliArgs {
.expect("--echo-test value must be a number"),
);
}
"--drift-test" => {
i += 1;
drift_test_secs = Some(
args.get(i)
.expect("--drift-test requires seconds")
.parse()
.expect("--drift-test value must be a number"),
);
}
"--sweep" => sweep = true,
"--help" | "-h" => { "--help" | "-h" => {
eprintln!("Usage: wzp-client [options] [relay-addr]"); eprintln!("Usage: wzp-client [options] [relay-addr]");
eprintln!(); eprintln!();
@@ -98,6 +178,13 @@ fn parse_args() -> CliArgs {
eprintln!(" --send-file <file> Send a raw PCM file (48kHz mono s16le)");
eprintln!(" --record <file.raw> Record received audio to raw PCM file");
eprintln!(" --echo-test <secs> Run automated echo quality test");
eprintln!(" --drift-test <secs> Run automated clock-drift measurement");
eprintln!(" --sweep Run jitter buffer parameter sweep (local, no network)");
eprintln!(" --seed <hex> Identity seed (64 hex chars, featherChat compatible)");
eprintln!(" --mnemonic <words...> Identity seed as BIP39 mnemonic (24 words)");
eprintln!(" --room <name> Room name (hashed for privacy before sending)");
eprintln!(" --token <token> featherChat bearer token for relay auth");
eprintln!(" --metrics-file <path> Write JSONL telemetry to file (1 line/sec)");
eprintln!(" (48kHz mono s16le, play with ffplay -f s16le -ar 48000 -ch_layout mono file.raw)");
eprintln!();
eprintln!("Default relay: 127.0.0.1:4433");
@@ -127,23 +214,52 @@ fn parse_args() -> CliArgs {
send_file,
record_file,
echo_test_secs,
drift_test_secs,
sweep,
seed_hex,
mnemonic,
room,
token,
_metrics_file: metrics_file,
}
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
tracing_subscriber::fmt().init();
rustls::crypto::ring::default_provider()
.install_default()
.expect("failed to install rustls crypto provider");
let cli = parse_args();
// --sweep runs locally (no network), so handle it before connecting.
if cli.sweep {
wzp_client::sweep::run_and_print_default_sweep();
return Ok(());
}
let seed = cli.resolve_seed();
info!(
relay = %cli.relay_addr,
live = cli.live,
send_tone = ?cli.send_tone_secs,
record = ?cli.record_file,
room = ?cli.room,
"WarzonePhone client" "WarzonePhone client"
); );
// Hash room name for SNI privacy (or "default" if none specified)
let sni = match &cli.room {
Some(name) => {
let hashed = wzp_crypto::hash_room_name(name);
info!(room = %name, hashed = %hashed, "room name hashed for SNI");
hashed
}
None => "default".to_string(),
};
let client_config = wzp_transport::client_config();
let bind_addr = if cli.relay_addr.is_ipv6() {
"[::]:0".parse()?
@@ -152,12 +268,28 @@ async fn main() -> anyhow::Result<()> {
};
let endpoint = wzp_transport::create_endpoint(bind_addr, None)?;
let connection =
wzp_transport::connect(&endpoint, cli.relay_addr, &sni, client_config).await?;
info!("Connected to relay");
let transport = Arc::new(wzp_transport::QuinnTransport::new(connection));
// Send auth token if provided (relay with --auth-url expects this first)
if let Some(ref token) = cli.token {
let auth = wzp_proto::SignalMessage::AuthToken {
token: token.clone(),
};
transport.send_signal(&auth).await?;
info!("auth token sent");
}
// Crypto handshake — establishes verified identity + session key
let _crypto_session = wzp_client::handshake::perform_handshake(
&*transport,
&seed.0,
).await?;
info!("crypto handshake complete");
if cli.live {
#[cfg(feature = "audio")]
{
@@ -172,6 +304,15 @@ async fn main() -> anyhow::Result<()> {
wzp_client::echo_test::print_report(&result);
transport.close().await?;
Ok(())
} else if let Some(secs) = cli.drift_test_secs {
let config = wzp_client::drift_test::DriftTestConfig {
duration_secs: secs,
tone_freq_hz: 440.0,
};
let result = wzp_client::drift_test::run_drift_test(&*transport, &config).await?;
wzp_client::drift_test::print_drift_report(&result);
transport.close().await?;
Ok(())
} else if cli.send_tone_secs.is_some() || cli.send_file.is_some() || cli.record_file.is_some() {
run_file_mode(transport, cli.send_tone_secs, cli.send_file, cli.record_file).await
} else {
@@ -218,6 +359,10 @@ async fn run_silence(transport: Arc<wzp_transport::QuinnTransport>) -> anyhow::R
}
info!(total_source, total_repair, total_bytes, "done — closing");
let hangup = wzp_proto::SignalMessage::Hangup {
reason: wzp_proto::HangupReason::Normal,
};
transport.send_signal(&hangup).await.ok();
transport.close().await?;
Ok(())
}
@@ -364,16 +509,20 @@ async fn run_file_mode(
// Wait for send to finish (or ctrl+c in recv)
let _ = send_handle.await;
// Send Hangup signal so the relay knows we're done
let hangup = wzp_proto::SignalMessage::Hangup {
reason: wzp_proto::HangupReason::Normal,
};
transport.send_signal(&hangup).await.ok();
let all_pcm = if record_file.is_some() {
// Wait a bit for remaining packets after sender finishes
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
transport.close().await?;
recv_handle.await.unwrap_or_default()
} else {
transport.close().await?;
recv_handle.abort();
Vec::new()
};
// Write recorded audio to file


@@ -0,0 +1,293 @@
//! Automated clock-drift measurement tool.
//!
//! Sends N seconds of a known 440 Hz tone through the transport, records
//! received frame timestamps on the other side, and compares actual received
//! duration vs expected duration to quantify timing drift and packet loss.
use std::time::{Duration, Instant};
use tracing::info;
use wzp_proto::MediaTransport;
use crate::call::{CallConfig, CallDecoder, CallEncoder};
const FRAME_SAMPLES: usize = 960; // 20ms @ 48kHz
const SAMPLE_RATE: u32 = 48_000;
/// Configuration for a drift measurement run.
#[derive(Debug, Clone)]
pub struct DriftTestConfig {
/// How many seconds of tone to send.
pub duration_secs: u32,
/// Frequency of the test tone (Hz).
pub tone_freq_hz: f32,
}
impl Default for DriftTestConfig {
fn default() -> Self {
Self {
duration_secs: 10,
tone_freq_hz: 440.0,
}
}
}
/// Results from a drift measurement run.
#[derive(Debug, Clone)]
pub struct DriftResult {
/// Expected duration in milliseconds (`duration_secs * 1000`).
pub expected_duration_ms: u64,
/// Actual measured duration in milliseconds (last_recv - first_recv).
pub actual_duration_ms: u64,
/// Drift: `actual - expected` (positive = receiver clock ran slow / packets delayed).
pub drift_ms: i64,
/// Drift as a percentage of expected duration.
pub drift_pct: f64,
/// Total frames sent by the sender.
pub frames_sent: u64,
/// Total frames successfully received and decoded.
pub frames_received: u64,
/// Packet loss percentage: `(1 - frames_received / frames_sent) * 100`.
pub loss_pct: f64,
}
impl DriftResult {
/// Compute a `DriftResult` from raw counters and timestamps.
pub fn compute(
expected_duration_ms: u64,
actual_duration_ms: u64,
frames_sent: u64,
frames_received: u64,
) -> Self {
let drift_ms = actual_duration_ms as i64 - expected_duration_ms as i64;
let drift_pct = if expected_duration_ms > 0 {
drift_ms as f64 / expected_duration_ms as f64 * 100.0
} else {
0.0
};
let loss_pct = if frames_sent > 0 {
(1.0 - frames_received as f64 / frames_sent as f64) * 100.0
} else {
0.0
};
Self {
expected_duration_ms,
actual_duration_ms,
drift_ms,
drift_pct,
frames_sent,
frames_received,
loss_pct,
}
}
}
/// Generate a sine wave frame at a given frequency.
fn sine_frame(freq_hz: f32, frame_offset: u64) -> Vec<i16> {
let start = frame_offset * FRAME_SAMPLES as u64;
(0..FRAME_SAMPLES)
.map(|i| {
let t = (start + i as u64) as f32 / SAMPLE_RATE as f32;
(f32::sin(2.0 * std::f32::consts::PI * freq_hz * t) * 16000.0) as i16
})
.collect()
}
/// Run the drift measurement test.
///
/// 1. Spawns a send task that encodes `duration_secs` of tone at 20 ms intervals.
/// 2. Spawns a recv task that counts decoded frames and tracks first/last timestamps.
/// 3. After the sender finishes, waits 2 seconds for trailing packets.
/// 4. Computes and returns the `DriftResult`.
pub async fn run_drift_test(
transport: &(dyn MediaTransport + Send + Sync),
config: &DriftTestConfig,
) -> anyhow::Result<DriftResult> {
let call_config = CallConfig::default();
let mut encoder = CallEncoder::new(&call_config);
let mut decoder = CallDecoder::new(&call_config);
let total_frames: u64 = config.duration_secs as u64 * 50; // 50 frames/s at 20 ms
let frame_duration = Duration::from_millis(20);
let mut pcm_buf = vec![0i16; FRAME_SAMPLES];
let mut frames_sent: u64 = 0;
let mut frames_received: u64 = 0;
let mut first_recv_time: Option<Instant> = None;
let mut last_recv_time: Option<Instant> = None;
info!(
duration_secs = config.duration_secs,
tone_hz = config.tone_freq_hz,
total_frames = total_frames,
"starting drift measurement"
);
let start = Instant::now();
// Send + interleaved receive loop (same pattern as echo_test)
for frame_idx in 0..total_frames {
// --- send ---
let pcm = sine_frame(config.tone_freq_hz, frame_idx);
let packets = encoder.encode_frame(&pcm)?;
for pkt in &packets {
transport.send_media(pkt).await?;
}
frames_sent += 1;
// --- try to receive (short window so we don't block the sender) ---
let recv_deadline = Instant::now() + Duration::from_millis(5);
loop {
if Instant::now() >= recv_deadline {
break;
}
match tokio::time::timeout(Duration::from_millis(2), transport.recv_media()).await {
Ok(Ok(Some(pkt))) => {
let is_repair = pkt.header.is_repair;
decoder.ingest(pkt);
if !is_repair {
if let Some(_n) = decoder.decode_next(&mut pcm_buf) {
let now = Instant::now();
if first_recv_time.is_none() {
first_recv_time = Some(now);
}
last_recv_time = Some(now);
frames_received += 1;
}
}
}
_ => break,
}
}
if (frame_idx + 1) % 250 == 0 {
info!(
frame = frame_idx + 1,
sent = frames_sent,
recv = frames_received,
elapsed = format!("{:.1}s", start.elapsed().as_secs_f64()),
"drift-test progress"
);
}
tokio::time::sleep(frame_duration).await;
}
// Drain trailing packets for 2 seconds
info!("sender done, draining trailing packets for 2s...");
let drain_deadline = Instant::now() + Duration::from_secs(2);
while Instant::now() < drain_deadline {
match tokio::time::timeout(Duration::from_millis(100), transport.recv_media()).await {
Ok(Ok(Some(pkt))) => {
let is_repair = pkt.header.is_repair;
decoder.ingest(pkt);
if !is_repair {
if let Some(_n) = decoder.decode_next(&mut pcm_buf) {
let now = Instant::now();
if first_recv_time.is_none() {
first_recv_time = Some(now);
}
last_recv_time = Some(now);
frames_received += 1;
}
}
}
_ => break,
}
}
// Compute result
let expected_duration_ms = config.duration_secs as u64 * 1000;
let actual_duration_ms = match (first_recv_time, last_recv_time) {
(Some(first), Some(last)) => last.duration_since(first).as_millis() as u64,
_ => 0,
};
let result = DriftResult::compute(
expected_duration_ms,
actual_duration_ms,
frames_sent,
frames_received,
);
info!(
expected_ms = result.expected_duration_ms,
actual_ms = result.actual_duration_ms,
drift_ms = result.drift_ms,
drift_pct = format!("{:.4}%", result.drift_pct),
loss_pct = format!("{:.1}%", result.loss_pct),
"drift measurement complete"
);
Ok(result)
}
/// Pretty-print the drift measurement results.
pub fn print_drift_report(result: &DriftResult) {
println!();
println!("=== Drift Measurement Report ===");
println!();
println!("Frames sent: {}", result.frames_sent);
println!("Frames received: {}", result.frames_received);
println!("Packet loss: {:.1}%", result.loss_pct);
println!();
println!("Expected duration: {} ms", result.expected_duration_ms);
println!("Actual duration: {} ms", result.actual_duration_ms);
println!("Drift: {} ms ({:+.4}%)", result.drift_ms, result.drift_pct);
println!();
// Interpretation
let abs_drift = result.drift_ms.unsigned_abs();
if result.frames_received == 0 {
println!("WARNING: No frames received. Transport may be non-functional.");
} else if abs_drift < 5 {
println!("Result: EXCELLENT -- drift is negligible (<5 ms).");
} else if abs_drift < 20 {
println!("Result: GOOD -- drift is within acceptable bounds (<20 ms).");
} else if abs_drift < 100 {
println!("Result: FAIR -- noticeable drift ({} ms). Clock sync may be needed.", abs_drift);
} else {
println!("Result: POOR -- significant drift ({} ms). Investigate clock sources.", abs_drift);
}
println!();
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn drift_result_calculations() {
// Perfect case: no drift, no loss
let r = DriftResult::compute(10_000, 10_000, 500, 500);
assert_eq!(r.drift_ms, 0);
assert!((r.drift_pct - 0.0).abs() < f64::EPSILON);
assert!((r.loss_pct - 0.0).abs() < f64::EPSILON);
// Positive drift (receiver duration longer than expected)
let r = DriftResult::compute(10_000, 10_050, 500, 490);
assert_eq!(r.drift_ms, 50);
assert!((r.drift_pct - 0.5).abs() < 1e-9); // 50/10000 * 100 = 0.5%
assert!((r.loss_pct - 2.0).abs() < 1e-9); // (1 - 490/500) * 100 = 2.0%
// Negative drift (receiver duration shorter than expected)
let r = DriftResult::compute(10_000, 9_900, 500, 450);
assert_eq!(r.drift_ms, -100);
assert!((r.drift_pct - (-1.0)).abs() < 1e-9); // -100/10000 * 100 = -1.0%
assert!((r.loss_pct - 10.0).abs() < 1e-9); // (1 - 450/500) * 100 = 10.0%
// Edge: zero frames sent (avoid division by zero)
let r = DriftResult::compute(0, 0, 0, 0);
assert_eq!(r.drift_ms, 0);
assert!((r.drift_pct - 0.0).abs() < f64::EPSILON);
assert!((r.loss_pct - 0.0).abs() < f64::EPSILON);
}
#[test]
fn drift_config_defaults() {
let cfg = DriftTestConfig::default();
assert_eq!(cfg.duration_secs, 10);
assert!((cfg.tone_freq_hz - 440.0).abs() < f32::EPSILON);
}
}


@@ -266,7 +266,7 @@ pub async fn run_echo_test(
}
}
let jitter_stats = decoder.stats().clone();
let total_frames_received = recv_pcm.len() as u64 / FRAME_SAMPLES as u64;
let overall_loss = if total_frames > 0 {
(1.0 - total_frames_received as f32 / total_frames as f32) * 100.0


@@ -0,0 +1,157 @@
//! featherChat signaling bridge.
//!
//! Sends WZP call signaling (Offer/Answer/Hangup) through featherChat's
//! E2E encrypted WebSocket channel as `WireMessage::CallSignal`.
//!
//! Flow:
//! 1. Client connects to featherChat WS with bearer token
//! 2. Sends CallOffer as CallSignal(signal_type=Offer, payload=JSON SignalMessage)
//! 3. Receives CallAnswer as CallSignal(signal_type=Answer, payload=JSON SignalMessage)
//! 4. Extracts relay address from the answer
//! 5. Connects QUIC to relay for media
use serde::{Deserialize, Serialize};
use wzp_proto::packet::SignalMessage;
/// featherChat CallSignal types (mirrors warzone-protocol::message::CallSignalType).
#[derive(Clone, Debug, Serialize, Deserialize)]
pub enum CallSignalType {
Offer,
Answer,
IceCandidate,
Hangup,
Reject,
Ringing,
Busy,
Hold,
Unhold,
Mute,
Unmute,
Transfer,
}
/// A CallSignal as sent through featherChat's WireMessage.
/// This is what goes in the `payload` field of `WireMessage::CallSignal`.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct WzpCallPayload {
/// The WZP SignalMessage (CallOffer, CallAnswer, etc.) serialized as JSON.
pub signal: SignalMessage,
/// The relay address to connect to for media (host:port).
pub relay_addr: Option<String>,
/// Room name on the relay.
pub room: Option<String>,
}
/// Parameters for initiating a call through featherChat.
pub struct CallInitParams {
/// featherChat server URL (e.g., "wss://chat.example.com/ws").
pub server_url: String,
/// Bearer token for authentication.
pub token: String,
/// Target peer fingerprint (who to call).
pub target_fingerprint: String,
/// Relay address for media transport.
pub relay_addr: String,
/// Room name on the relay.
pub room: String,
/// Our identity seed for crypto.
pub seed: [u8; 32],
}
/// Result of a successful call setup.
pub struct CallSetupResult {
/// Relay address to connect to.
pub relay_addr: String,
/// Room name.
pub room: String,
/// The peer's CallAnswer signal (contains ephemeral key, etc.)
pub answer: SignalMessage,
}
/// Serialize a WZP SignalMessage into a featherChat CallSignal payload string.
pub fn encode_call_payload(
signal: &SignalMessage,
relay_addr: Option<&str>,
room: Option<&str>,
) -> String {
let payload = WzpCallPayload {
signal: signal.clone(),
relay_addr: relay_addr.map(|s| s.to_string()),
room: room.map(|s| s.to_string()),
};
serde_json::to_string(&payload).unwrap_or_default()
}
/// Deserialize a featherChat CallSignal payload back to WZP types.
pub fn decode_call_payload(payload: &str) -> Result<WzpCallPayload, String> {
serde_json::from_str(payload).map_err(|e| format!("invalid call payload: {e}"))
}
/// Map WZP SignalMessage type to featherChat CallSignalType.
pub fn signal_to_call_type(signal: &SignalMessage) -> CallSignalType {
match signal {
SignalMessage::CallOffer { .. } => CallSignalType::Offer,
SignalMessage::CallAnswer { .. } => CallSignalType::Answer,
SignalMessage::IceCandidate { .. } => CallSignalType::IceCandidate,
SignalMessage::Hangup { .. } => CallSignalType::Hangup,
SignalMessage::Rekey { .. } => CallSignalType::Offer, // reuse
SignalMessage::QualityUpdate { .. } => CallSignalType::Offer, // reuse
SignalMessage::Ping { .. } | SignalMessage::Pong { .. } => CallSignalType::Offer,
SignalMessage::AuthToken { .. } => CallSignalType::Offer,
SignalMessage::Hold => CallSignalType::Hold,
SignalMessage::Unhold => CallSignalType::Unhold,
SignalMessage::Mute => CallSignalType::Mute,
SignalMessage::Unmute => CallSignalType::Unmute,
SignalMessage::Transfer { .. } => CallSignalType::Transfer,
SignalMessage::TransferAck => CallSignalType::Offer, // reuse
}
}
#[cfg(test)]
mod tests {
use super::*;
use wzp_proto::QualityProfile;
#[test]
fn payload_roundtrip() {
let signal = SignalMessage::CallOffer {
identity_pub: [1u8; 32],
ephemeral_pub: [2u8; 32],
signature: vec![3u8; 64],
supported_profiles: vec![QualityProfile::GOOD],
};
let encoded = encode_call_payload(&signal, Some("relay.example.com:4433"), Some("myroom"));
let decoded = decode_call_payload(&encoded).unwrap();
assert_eq!(decoded.relay_addr.unwrap(), "relay.example.com:4433");
assert_eq!(decoded.room.unwrap(), "myroom");
assert!(matches!(decoded.signal, SignalMessage::CallOffer { .. }));
}
#[test]
fn signal_type_mapping() {
let offer = SignalMessage::CallOffer {
identity_pub: [0; 32],
ephemeral_pub: [0; 32],
signature: vec![],
supported_profiles: vec![],
};
assert!(matches!(signal_to_call_type(&offer), CallSignalType::Offer));
let hangup = SignalMessage::Hangup {
reason: wzp_proto::HangupReason::Normal,
};
assert!(matches!(signal_to_call_type(&hangup), CallSignalType::Hangup));
assert!(matches!(signal_to_call_type(&SignalMessage::Hold), CallSignalType::Hold));
assert!(matches!(signal_to_call_type(&SignalMessage::Unhold), CallSignalType::Unhold));
assert!(matches!(signal_to_call_type(&SignalMessage::Mute), CallSignalType::Mute));
assert!(matches!(signal_to_call_type(&SignalMessage::Unmute), CallSignalType::Unmute));
let transfer = SignalMessage::Transfer {
target_fingerprint: "abc".to_string(),
relay_addr: None,
};
assert!(matches!(signal_to_call_type(&transfer), CallSignalType::Transfer));
}
}
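The five-step flow in the module docs (Offer out, Answer in, relay address extracted, QUIC dial) can be sketched as a tiny state machine. This is an illustrative, std-only sketch — `CallState`, `on_signal`, and the string signal tags are hypothetical stand-ins, not the crate's API:

```rust
/// Illustrative call-setup states mirroring the documented signaling flow.
#[derive(Debug, PartialEq)]
enum CallState {
    Idle,
    OfferSent,
    /// Answer received; carries the relay address extracted from it.
    Connected(String),
}

/// Advance the state machine on an incoming signal tag.
/// `relay_addr` stands in for the field the real CallAnswer payload carries.
fn on_signal(state: CallState, signal: &str, relay_addr: Option<&str>) -> CallState {
    match (state, signal) {
        (CallState::Idle, "offer_sent") => CallState::OfferSent,
        (CallState::OfferSent, "answer") => {
            CallState::Connected(relay_addr.unwrap_or("").to_string())
        }
        // Hangup drops back to Idle from any state.
        (_, "hangup") => CallState::Idle,
        // Anything else leaves the state unchanged.
        (s, _) => s,
    }
}

fn main() {
    let s = on_signal(CallState::Idle, "offer_sent", None);
    let s = on_signal(s, "answer", Some("relay.example.com:4433"));
    // Only after the Answer arrives does the caller know where to dial media.
    assert_eq!(s, CallState::Connected("relay.example.com:4433".into()));
}
```

The point of the shape: the relay address for media is learned from signaling, never assumed up front.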

View File

@@ -10,8 +10,12 @@
pub mod audio_io;
pub mod bench;
pub mod call;
pub mod drift_test;
pub mod echo_test;
pub mod featherchat;
pub mod handshake;
pub mod metrics;
pub mod sweep;
#[cfg(feature = "audio")]
pub use audio_io::{AudioCapture, AudioPlayback};

View File

@@ -0,0 +1,186 @@
//! Client-side JSONL metrics export.
//!
//! When `--metrics-file <path>` is passed, the client writes one JSON object
//! per second to the specified file. Each line is a self-contained JSON object
//! (JSONL format) containing jitter buffer stats, loss, and quality profile.
use std::fs::{File, OpenOptions};
use std::io::Write;
use std::time::{Duration, Instant};
use serde::Serialize;
use wzp_proto::jitter::JitterStats;
/// A single metrics snapshot written as one JSONL line.
#[derive(Serialize)]
pub struct ClientMetricsSnapshot {
pub ts: String,
pub buffer_depth: usize,
pub underruns: u64,
pub overruns: u64,
pub loss_pct: f64,
pub rtt_ms: u64,
pub jitter_ms: u64,
pub frames_sent: u64,
pub frames_received: u64,
pub quality_profile: String,
}
/// Periodic JSONL writer that respects a configurable interval.
pub struct MetricsWriter {
file: File,
interval: Duration,
last_write: Instant,
}
impl MetricsWriter {
/// Create a new `MetricsWriter` that appends JSONL to the given path.
///
/// The file is created (or truncated) immediately.
pub fn new(path: &str, interval_secs: u64) -> Result<Self, anyhow::Error> {
let file = OpenOptions::new()
.create(true)
.write(true)
.truncate(true)
.open(path)?;
Ok(Self {
file,
interval: Duration::from_secs(interval_secs),
            // Set last_write far in the past so the first call writes immediately.
            // checked_sub avoids a panic on platforms where the monotonic clock
            // is close to its origin (Instant - Duration panics on underflow).
            last_write: Instant::now()
                .checked_sub(Duration::from_secs(interval_secs + 1))
                .unwrap_or_else(Instant::now),
})
}
/// Write a JSONL line if the interval has elapsed since the last write.
///
/// Returns `Ok(true)` when a line was written, `Ok(false)` when skipped.
pub fn maybe_write(&mut self, snapshot: &ClientMetricsSnapshot) -> Result<bool, anyhow::Error> {
let now = Instant::now();
if now.duration_since(self.last_write) >= self.interval {
let line = serde_json::to_string(snapshot)?;
writeln!(self.file, "{}", line)?;
self.file.flush()?;
self.last_write = now;
Ok(true)
} else {
Ok(false)
}
}
}
/// Build a `ClientMetricsSnapshot` from jitter buffer stats and a quality profile name.
///
/// Fields not available from `JitterStats` alone (rtt_ms, jitter_ms, frames_sent)
/// are set to zero — the caller can override them if the data is available.
pub fn snapshot_from_stats(stats: &JitterStats, profile: &str) -> ClientMetricsSnapshot {
let loss_pct = if stats.packets_received > 0 {
(stats.packets_lost as f64 / stats.packets_received as f64) * 100.0
} else {
0.0
};
ClientMetricsSnapshot {
ts: chrono::Utc::now().to_rfc3339_opts(chrono::SecondsFormat::Secs, true),
buffer_depth: stats.current_depth,
underruns: stats.underruns,
overruns: stats.overruns,
loss_pct,
rtt_ms: 0,
jitter_ms: 0,
frames_sent: 0,
frames_received: stats.total_decoded,
quality_profile: profile.to_string(),
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_test_stats() -> JitterStats {
JitterStats {
packets_received: 100,
packets_played: 95,
packets_lost: 5,
packets_late: 2,
packets_duplicate: 0,
current_depth: 8,
total_decoded: 93,
underruns: 1,
overruns: 0,
max_depth_seen: 12,
}
}
#[test]
fn snapshot_serializes_to_json() {
let stats = make_test_stats();
let snap = snapshot_from_stats(&stats, "GOOD");
let json = serde_json::to_string(&snap).unwrap();
// Verify expected fields are present in the JSON string.
assert!(json.contains("\"ts\""));
assert!(json.contains("\"buffer_depth\":8"));
assert!(json.contains("\"underruns\":1"));
assert!(json.contains("\"overruns\":0"));
assert!(json.contains("\"loss_pct\":5."));
assert!(json.contains("\"rtt_ms\":0"));
assert!(json.contains("\"jitter_ms\":0"));
assert!(json.contains("\"frames_sent\":0"));
assert!(json.contains("\"frames_received\":93"));
assert!(json.contains("\"quality_profile\":\"GOOD\""));
// Verify it round-trips as valid JSON.
let value: serde_json::Value = serde_json::from_str(&json).unwrap();
assert_eq!(value["buffer_depth"], 8);
assert_eq!(value["quality_profile"], "GOOD");
}
#[test]
fn metrics_writer_creates_file() {
let dir = std::env::temp_dir();
let path = dir.join("wzp_metrics_test.jsonl");
let path_str = path.to_str().unwrap();
let mut writer = MetricsWriter::new(path_str, 1).unwrap();
let stats = make_test_stats();
let snap = snapshot_from_stats(&stats, "DEGRADED");
let wrote = writer.maybe_write(&snap).unwrap();
assert!(wrote, "first write should succeed immediately");
// Read the file back and verify it contains valid JSONL.
let contents = std::fs::read_to_string(&path).unwrap();
let lines: Vec<&str> = contents.lines().collect();
assert_eq!(lines.len(), 1, "should have exactly one JSONL line");
let value: serde_json::Value = serde_json::from_str(lines[0]).unwrap();
assert_eq!(value["quality_profile"], "DEGRADED");
assert_eq!(value["buffer_depth"], 8);
// Clean up.
let _ = std::fs::remove_file(&path);
}
#[test]
fn metrics_writer_respects_interval() {
let dir = std::env::temp_dir();
let path = dir.join("wzp_metrics_interval_test.jsonl");
let path_str = path.to_str().unwrap();
let mut writer = MetricsWriter::new(path_str, 60).unwrap();
let stats = make_test_stats();
let snap = snapshot_from_stats(&stats, "GOOD");
// First write succeeds (last_write is set far in the past).
let first = writer.maybe_write(&snap).unwrap();
assert!(first, "first write should succeed");
// Immediate second write should be skipped (60s interval).
let second = writer.maybe_write(&snap).unwrap();
assert!(!second, "second write should be skipped — interval not elapsed");
// Clean up.
let _ = std::fs::remove_file(&path);
}
}
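The interval gate inside `MetricsWriter::maybe_write` reduces to a few lines. A std-only sketch (the `Gate` type here is hypothetical, not part of the crate) showing the same first-call-fires-immediately behavior:

```rust
use std::time::{Duration, Instant};

/// Minimal interval gate: `ready()` returns true at most once per `interval`.
struct Gate {
    interval: Duration,
    last: Option<Instant>, // None => never fired, so fire immediately
}

impl Gate {
    fn new(interval: Duration) -> Self {
        Self { interval, last: None }
    }

    fn ready(&mut self) -> bool {
        let now = Instant::now();
        match self.last {
            // Within the interval: skip.
            Some(t) if now.duration_since(t) < self.interval => false,
            // Never fired, or interval elapsed: fire and record the time.
            _ => {
                self.last = Some(now);
                true
            }
        }
    }
}

fn main() {
    let mut gate = Gate::new(Duration::from_secs(60));
    assert!(gate.ready(), "first call fires immediately");
    assert!(!gate.ready(), "second call within the interval is skipped");
}
```

Using `Option<Instant>` sidesteps the `Instant - Duration` back-dating trick entirely, which is one way to avoid its potential underflow panic.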

View File

@@ -0,0 +1,254 @@
//! Parameter sweep tool for jitter buffer configurations.
//!
//! Tests different (target_depth, max_depth) combinations in a local
//! encoder-to-decoder pipeline (no network) and reports frame loss,
//! estimated latency, underruns, and overruns for each configuration.
use crate::call::{CallConfig, CallDecoder, CallEncoder};
use wzp_proto::QualityProfile;
const FRAME_SAMPLES: usize = 960; // 20ms @ 48kHz
const SAMPLE_RATE: u32 = 48_000;
const FRAME_DURATION_MS: u32 = 20;
/// Configuration for a parameter sweep.
pub struct SweepConfig {
/// Target jitter buffer depths to test (in packets).
pub target_depths: Vec<usize>,
/// Maximum jitter buffer depths to test (in packets).
pub max_depths: Vec<usize>,
/// Duration in seconds to run each configuration.
pub test_duration_secs: u32,
/// Frequency of the test tone in Hz.
pub tone_freq_hz: f32,
}
impl Default for SweepConfig {
fn default() -> Self {
Self {
target_depths: vec![10, 25, 50, 100, 200],
max_depths: vec![50, 100, 250, 500],
test_duration_secs: 2,
tone_freq_hz: 440.0,
}
}
}
/// Result from one (target_depth, max_depth) configuration.
#[derive(Debug, Clone)]
pub struct SweepResult {
/// Jitter buffer target depth used.
pub target_depth: usize,
/// Jitter buffer max depth used.
pub max_depth: usize,
/// Total frames sent into the encoder.
pub frames_sent: u64,
/// Total frames successfully decoded.
pub frames_received: u64,
/// Frame loss percentage.
pub loss_pct: f64,
/// Estimated latency in ms (target_depth * frame_duration).
pub avg_latency_ms: f64,
/// Number of jitter buffer underruns.
pub underruns: u64,
/// Number of jitter buffer overruns (packets dropped due to full buffer).
pub overruns: u64,
}
/// Generate a sine wave frame at the given frequency and frame offset.
fn sine_frame(freq_hz: f32, frame_offset: u64) -> Vec<i16> {
let start = frame_offset * FRAME_SAMPLES as u64;
(0..FRAME_SAMPLES)
.map(|i| {
let t = (start + i as u64) as f32 / SAMPLE_RATE as f32;
(f32::sin(2.0 * std::f32::consts::PI * freq_hz * t) * 16000.0) as i16
})
.collect()
}
/// Run a local parameter sweep (no network).
///
/// For each (target_depth, max_depth) combination, creates an encoder and
/// decoder, pushes frames through the pipeline, and collects statistics.
/// Combinations where `target_depth > max_depth` are skipped.
pub fn run_local_sweep(config: &SweepConfig) -> Vec<SweepResult> {
let frames_per_config =
(config.test_duration_secs as u64) * (1000 / FRAME_DURATION_MS as u64);
let mut results = Vec::new();
for &target in &config.target_depths {
for &max in &config.max_depths {
// Skip invalid combinations where target exceeds max.
if target > max {
continue;
}
let call_cfg = CallConfig {
profile: QualityProfile::GOOD,
jitter_target: target,
jitter_max: max,
                jitter_min: target.clamp(1, 3),
..Default::default()
};
let mut encoder = CallEncoder::new(&call_cfg);
let mut decoder = CallDecoder::new(&call_cfg);
let mut pcm_out = vec![0i16; FRAME_SAMPLES];
let mut frames_decoded = 0u64;
for frame_idx in 0..frames_per_config {
// Encode a tone frame.
let pcm_in = sine_frame(config.tone_freq_hz, frame_idx);
let packets = match encoder.encode_frame(&pcm_in) {
Ok(p) => p,
Err(_) => continue,
};
// Feed all packets (source + repair) into the decoder.
for pkt in packets {
decoder.ingest(pkt);
}
// Attempt to decode one frame.
if decoder.decode_next(&mut pcm_out).is_some() {
frames_decoded += 1;
}
}
// Drain: keep decoding until the jitter buffer is empty.
for _ in 0..max {
if decoder.decode_next(&mut pcm_out).is_some() {
frames_decoded += 1;
} else {
break;
}
}
let stats = decoder.stats().clone();
let loss_pct = if frames_per_config > 0 {
(1.0 - frames_decoded as f64 / frames_per_config as f64) * 100.0
} else {
0.0
};
results.push(SweepResult {
target_depth: target,
max_depth: max,
frames_sent: frames_per_config,
frames_received: frames_decoded,
loss_pct: loss_pct.max(0.0),
avg_latency_ms: target as f64 * FRAME_DURATION_MS as f64,
underruns: stats.underruns,
overruns: stats.overruns,
});
}
}
results
}
/// Print a formatted ASCII table of sweep results.
pub fn print_sweep_table(results: &[SweepResult]) {
println!();
println!("=== Jitter Buffer Parameter Sweep ===");
println!();
println!(
" {:>6} | {:>4} | {:>6} | {:>6} | {:>6} | {:>10} | {:>9} | {:>8}",
"target", "max", "sent", "recv", "loss%", "latency_ms", "underruns", "overruns"
);
println!(
" {:-<6}-+-{:-<4}-+-{:-<6}-+-{:-<6}-+-{:-<6}-+-{:-<10}-+-{:-<9}-+-{:-<8}",
"", "", "", "", "", "", "", ""
);
for r in results {
println!(
" {:>6} | {:>4} | {:>6} | {:>6} | {:>5.1}% | {:>10.0} | {:>9} | {:>8}",
r.target_depth,
r.max_depth,
r.frames_sent,
r.frames_received,
r.loss_pct,
r.avg_latency_ms,
r.underruns,
r.overruns,
);
}
println!();
}
/// Run a default sweep and print the results.
///
/// This is the entry point for the `--sweep` CLI flag.
pub fn run_and_print_default_sweep() {
let config = SweepConfig::default();
let results = run_local_sweep(&config);
print_sweep_table(&results);
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn sweep_config_default() {
let cfg = SweepConfig::default();
assert_eq!(cfg.target_depths.len(), 5);
assert_eq!(cfg.max_depths.len(), 4);
assert!(cfg.test_duration_secs > 0);
assert!(cfg.tone_freq_hz > 0.0);
// All default targets should be positive.
assert!(cfg.target_depths.iter().all(|&d| d > 0));
assert!(cfg.max_depths.iter().all(|&d| d > 0));
}
#[test]
fn local_sweep_runs() {
let cfg = SweepConfig {
target_depths: vec![3, 10],
max_depths: vec![50, 100],
test_duration_secs: 1,
tone_freq_hz: 440.0,
};
let results = run_local_sweep(&cfg);
// 2 targets x 2 maxes = 4 configs (all valid since targets < maxes).
assert_eq!(results.len(), 4);
for r in &results {
assert!(r.frames_sent > 0, "frames_sent should be > 0");
assert!(r.frames_received > 0, "frames_received should be > 0");
assert!(r.avg_latency_ms > 0.0, "latency should be > 0");
}
}
#[test]
fn sweep_table_formats() {
// Verify print_sweep_table doesn't panic with various inputs.
print_sweep_table(&[]);
let results = vec![
SweepResult {
target_depth: 10,
max_depth: 50,
frames_sent: 100,
frames_received: 98,
loss_pct: 2.0,
avg_latency_ms: 200.0,
underruns: 2,
overruns: 0,
},
SweepResult {
target_depth: 25,
max_depth: 100,
frames_sent: 100,
frames_received: 100,
loss_pct: 0.0,
avg_latency_ms: 500.0,
underruns: 0,
overruns: 0,
},
];
print_sweep_table(&results);
}
}
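The `latency_ms` column is simply depth times frame duration. A quick sketch ties it back to the low-latency web-bridge settings from the changelog (jitter_target 3, jitter_max 20):

```rust
/// 20 ms frames at 48 kHz, as in the sweep tool.
const FRAME_DURATION_MS: f64 = 20.0;

/// Estimated steady-state jitter-buffer latency for a target depth in packets.
fn est_latency_ms(target_depth: usize) -> f64 {
    target_depth as f64 * FRAME_DURATION_MS
}

fn main() {
    // Web-bridge config: jitter_target = 3 packets -> 60 ms of added latency.
    assert_eq!(est_latency_ms(3), 60.0);
    // jitter_max = 20 packets caps buffering at 400 ms.
    assert_eq!(est_latency_ms(20), 400.0);
}
```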

View File

@@ -16,4 +16,10 @@ audiopus = { workspace = true }
# Pure-Rust Codec2 implementation
codec2 = { workspace = true }
# RNG for comfort noise generation
rand = { workspace = true }
# ML-based noise suppression (pure-Rust port of RNNoise)
nnnoiseless = "0.5"
[dev-dependencies]

View File

@@ -0,0 +1,183 @@
//! ML-based noise suppression using nnnoiseless (pure-Rust RNNoise port).
//!
//! RNNoise operates on 480-sample frames at 48 kHz (10 ms). Our codec pipeline
//! uses 960-sample frames (20 ms), so each call processes two halves.
use nnnoiseless::DenoiseState;
/// Wraps [`DenoiseState`] to provide noise suppression on 960-sample (20 ms) PCM
/// frames at 48 kHz.
pub struct NoiseSupressor {
state: Box<DenoiseState<'static>>,
enabled: bool,
}
impl NoiseSupressor {
/// Create a new noise suppressor (enabled by default).
pub fn new() -> Self {
Self {
state: DenoiseState::new(),
enabled: true,
}
}
/// Process a 960-sample frame of 48 kHz mono PCM **in place**.
///
/// nnnoiseless expects f32 samples in the range roughly [-32768, 32767].
/// We convert i16 → f32, process two 480-sample halves, then convert back.
pub fn process(&mut self, pcm: &mut [i16]) {
if !self.enabled {
return;
}
debug_assert!(
pcm.len() >= 960,
"NoiseSupressor::process expects at least 960 samples, got {}",
pcm.len()
);
// Process in two 480-sample halves.
for half in 0..2 {
let offset = half * 480;
let end = offset + 480;
if end > pcm.len() {
break;
}
// i16 → f32
let mut float_buf = [0.0f32; 480];
for (i, &sample) in pcm[offset..end].iter().enumerate() {
float_buf[i] = sample as f32;
}
// nnnoiseless processes in-place, returns VAD probability (unused here).
let mut output = [0.0f32; 480];
let _vad = self.state.process_frame(&mut output, &float_buf);
// f32 → i16 with clamping
for (i, &val) in output.iter().enumerate() {
let clamped = val.max(-32768.0).min(32767.0);
pcm[offset + i] = clamped as i16;
}
}
}
/// Enable or disable noise suppression.
pub fn set_enabled(&mut self, enabled: bool) {
self.enabled = enabled;
}
/// Returns `true` if noise suppression is currently enabled.
pub fn is_enabled(&self) -> bool {
self.enabled
}
}
impl Default for NoiseSupressor {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn denoiser_creates() {
let ns = NoiseSupressor::new();
assert!(ns.is_enabled());
}
#[test]
fn denoiser_processes_frame() {
let mut ns = NoiseSupressor::new();
let mut pcm = vec![0i16; 960];
// Fill with a simple pattern so we have something to process.
for (i, s) in pcm.iter_mut().enumerate() {
*s = ((i % 100) as i16).wrapping_mul(100);
}
let original_len = pcm.len();
ns.process(&mut pcm);
assert_eq!(pcm.len(), original_len, "output length must match input length");
}
#[test]
fn denoiser_reduces_noise() {
let mut ns = NoiseSupressor::new();
// Generate a 440 Hz sine tone + white noise at 48 kHz.
// We need multiple frames for the RNN to converge.
let sample_rate = 48000.0f64;
let freq = 440.0f64;
let amplitude = 10000.0f64;
let noise_amplitude = 3000.0f64;
// Use a simple PRNG for reproducibility.
let mut rng_state: u32 = 12345;
let mut next_noise = || -> f64 {
// xorshift32
rng_state ^= rng_state << 13;
rng_state ^= rng_state >> 17;
rng_state ^= rng_state << 5;
// Map to [-1, 1]
(rng_state as f64 / u32::MAX as f64) * 2.0 - 1.0
};
// Feed several frames to let the RNN warm up, then measure the last one.
let num_warmup_frames = 20;
let mut last_input = vec![0i16; 960];
let mut last_output = vec![0i16; 960];
for frame_idx in 0..=num_warmup_frames {
let mut pcm = vec![0i16; 960];
for (i, s) in pcm.iter_mut().enumerate() {
let t = (frame_idx * 960 + i) as f64 / sample_rate;
let sine = amplitude * (2.0 * std::f64::consts::PI * freq * t).sin();
let noise = noise_amplitude * next_noise();
*s = (sine + noise).max(-32768.0).min(32767.0) as i16;
}
if frame_idx == num_warmup_frames {
last_input = pcm.clone();
}
ns.process(&mut pcm);
if frame_idx == num_warmup_frames {
last_output = pcm;
}
}
// Compute RMS of input and output.
let rms = |buf: &[i16]| -> f64 {
let sum: f64 = buf.iter().map(|&s| (s as f64) * (s as f64)).sum();
(sum / buf.len() as f64).sqrt()
};
let input_rms = rms(&last_input);
let output_rms = rms(&last_output);
// The denoiser should not amplify the signal beyond input.
// More importantly, the output should have measurably lower noise.
// We verify the output RMS is less than the input RMS (noise was reduced).
assert!(
output_rms < input_rms,
"expected output RMS ({output_rms:.1}) < input RMS ({input_rms:.1}); \
denoiser should reduce noise"
);
}
#[test]
fn denoiser_passthrough_when_disabled() {
let mut ns = NoiseSupressor::new();
ns.set_enabled(false);
assert!(!ns.is_enabled());
let original: Vec<i16> = (0..960).map(|i| (i * 10) as i16).collect();
let mut pcm = original.clone();
ns.process(&mut pcm);
assert_eq!(pcm, original, "disabled denoiser must not alter input");
}
}
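The 960-to-2×480 split that `process` performs can be expressed with `chunks_exact`, which also drops any trailing partial chunk the same way the bounds check does. A std-only sketch (the `denoise_halves` helper is illustrative, not the crate's API):

```rust
/// RNNoise frame size (10 ms @ 48 kHz) and our codec frame (20 ms).
const DENOISE_FRAME: usize = 480;
const CODEC_FRAME: usize = 960;

/// Split a 20 ms codec frame into the two 10 ms halves a denoiser consumes.
/// `chunks_exact` skips any trailing partial chunk, matching the `end > pcm.len()`
/// guard in `process`.
fn denoise_halves(pcm: &[i16]) -> Vec<&[i16]> {
    pcm.chunks_exact(DENOISE_FRAME).collect()
}

fn main() {
    let frame = vec![0i16; CODEC_FRAME];
    let halves = denoise_halves(&frame);
    assert_eq!(halves.len(), 2);
    assert!(halves.iter().all(|h| h.len() == DENOISE_FRAME));
}
```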

View File

@@ -12,11 +12,15 @@
pub mod adaptive;
pub mod codec2_dec;
pub mod codec2_enc;
pub mod denoise;
pub mod opus_dec;
pub mod opus_enc;
pub mod resample;
pub mod silence;
pub use adaptive::{AdaptiveDecoder, AdaptiveEncoder};
pub use denoise::NoiseSupressor;
pub use silence::{ComfortNoise, SilenceDetector};
pub use wzp_proto::{AudioDecoder, AudioEncoder, CodecId, QualityProfile};
/// Create an adaptive encoder starting at the given quality profile.

View File

@@ -0,0 +1,191 @@
//! Silence suppression and comfort noise generation.
//!
//! During silent periods (~50% of a typical call), full encoded frames waste
//! bandwidth. [`SilenceDetector`] detects silent audio based on RMS energy,
//! and [`ComfortNoise`] generates low-level background noise to fill gaps on
//! the decoder side.
use rand::Rng;
/// Detects silence in PCM audio using RMS energy with a hangover period.
///
/// The hangover prevents clipping the onset of speech: after silence is first
/// detected, the detector continues reporting "not silent" for `hangover_frames`
/// additional frames before transitioning to suppression.
pub struct SilenceDetector {
/// RMS threshold below which audio is considered silent (for i16 samples).
threshold_rms: f64,
/// Number of frames to keep sending after silence starts (prevents speech clipping).
hangover_frames: u32,
/// Count of consecutive frames whose RMS is below the threshold.
silent_frames: u32,
/// Whether suppression is currently active.
is_suppressing: bool,
}
impl SilenceDetector {
/// Create a new silence detector.
///
/// * `threshold_rms` — RMS energy below which a frame is silent (default: 100.0 for i16).
/// * `hangover_frames` — frames to keep sending after silence onset (default: 5 = 100ms at 20ms frames).
pub fn new(threshold_rms: f64, hangover_frames: u32) -> Self {
Self {
threshold_rms,
hangover_frames,
silent_frames: 0,
is_suppressing: false,
}
}
/// Compute the RMS (root mean square) energy of a PCM buffer.
pub fn rms(pcm: &[i16]) -> f64 {
if pcm.is_empty() {
return 0.0;
}
let sum_sq: f64 = pcm.iter().map(|&s| (s as f64) * (s as f64)).sum();
(sum_sq / pcm.len() as f64).sqrt()
}
/// Returns `true` if the frame should be suppressed (i.e. is silence past
/// the hangover period).
///
/// Call once per frame. The detector tracks consecutive silent frames
/// internally and only reports suppression after the hangover expires.
pub fn is_silent(&mut self, pcm: &[i16]) -> bool {
let energy = Self::rms(pcm);
if energy < self.threshold_rms {
self.silent_frames = self.silent_frames.saturating_add(1);
if self.silent_frames > self.hangover_frames {
self.is_suppressing = true;
}
} else {
// Speech detected — reset.
self.silent_frames = 0;
self.is_suppressing = false;
}
self.is_suppressing
}
/// Whether the detector is currently in the suppressing state.
pub fn suppressing(&self) -> bool {
self.is_suppressing
}
}
/// Generates low-level comfort noise to fill silent periods.
///
/// When the decoder receives a comfort-noise descriptor (or detects a gap
/// caused by silence suppression), it uses this to produce a natural-sounding
/// background hiss instead of dead silence.
pub struct ComfortNoise {
/// Peak amplitude of the generated noise (default: 50).
level: i16,
}
impl ComfortNoise {
/// Create a comfort noise generator with the given amplitude level.
pub fn new(level: i16) -> Self {
Self { level }
}
/// Fill `pcm` with low-level random noise in the range `[-level, level]`.
pub fn generate(&self, pcm: &mut [i16]) {
let mut rng = rand::thread_rng();
for sample in pcm.iter_mut() {
*sample = rng.gen_range(-self.level..=self.level);
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn silence_detector_detects_silence() {
let mut det = SilenceDetector::new(100.0, 5);
let silence = vec![0i16; 960];
// First 5 frames are hangover — should NOT suppress yet.
for _ in 0..5 {
assert!(!det.is_silent(&silence));
}
// Frame 6 onward: past hangover, should suppress.
assert!(det.is_silent(&silence));
assert!(det.is_silent(&silence));
}
#[test]
fn silence_detector_detects_speech() {
let mut det = SilenceDetector::new(100.0, 5);
// Generate a 1kHz sine wave at decent amplitude.
let pcm: Vec<i16> = (0..960)
.map(|i| {
let t = i as f64 / 48000.0;
(10000.0 * (2.0 * std::f64::consts::PI * 1000.0 * t).sin()) as i16
})
.collect();
// Should never report silent.
for _ in 0..20 {
assert!(!det.is_silent(&pcm));
}
}
#[test]
fn silence_detector_hangover() {
let mut det = SilenceDetector::new(100.0, 3);
let silence = vec![0i16; 960];
let speech: Vec<i16> = (0..960)
.map(|i| {
let t = i as f64 / 48000.0;
(5000.0 * (2.0 * std::f64::consts::PI * 440.0 * t).sin()) as i16
})
.collect();
// Feed silence past hangover to enter suppression.
for _ in 0..4 {
det.is_silent(&silence);
}
assert!(det.is_silent(&silence), "should be suppressing after hangover");
// Speech arrives — should immediately stop suppressing.
assert!(!det.is_silent(&speech));
assert!(!det.is_silent(&speech));
}
#[test]
fn comfort_noise_generates_nonzero() {
let cn = ComfortNoise::new(50);
let mut pcm = vec![0i16; 960];
cn.generate(&mut pcm);
// At least some samples should be non-zero.
assert!(pcm.iter().any(|&s| s != 0), "CN output should not be all zeros");
// All samples should be within [-50, 50].
assert!(pcm.iter().all(|&s| s.abs() <= 50), "CN samples out of range");
}
#[test]
fn rms_calculation() {
// All zeros → RMS 0.
assert_eq!(SilenceDetector::rms(&[0i16; 100]), 0.0);
// Constant value: RMS of [v, v, v, ...] = |v|.
let pcm = vec![100i16; 100];
let rms = SilenceDetector::rms(&pcm);
assert!((rms - 100.0).abs() < 0.01, "RMS of constant 100 should be 100, got {rms}");
// Known pattern: [3, 4] → sqrt((9+16)/2) = sqrt(12.5) ≈ 3.5355
let rms2 = SilenceDetector::rms(&[3, 4]);
assert!((rms2 - 3.5355).abs() < 0.01, "RMS of [3,4] should be ~3.5355, got {rms2}");
// Empty buffer → 0.
assert_eq!(SilenceDetector::rms(&[]), 0.0);
}
}
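The detector's threshold is in raw i16 RMS units; a std-only reimplementation of the `rms` helper makes the scale concrete (a threshold of 100 is roughly -50 dBFS against full scale 32767):

```rust
/// RMS energy of an i16 PCM buffer, mirroring `SilenceDetector::rms`.
fn rms(pcm: &[i16]) -> f64 {
    if pcm.is_empty() {
        return 0.0;
    }
    let sum_sq: f64 = pcm.iter().map(|&s| (s as f64) * (s as f64)).sum();
    (sum_sq / pcm.len() as f64).sqrt()
}

fn main() {
    // Constant amplitude: RMS equals the absolute sample value.
    assert!((rms(&[100i16; 480]) - 100.0).abs() < 1e-9);
    // Known pattern: [3, 4] -> sqrt((9 + 16) / 2) = sqrt(12.5).
    assert!((rms(&[3, 4]) - 12.5f64.sqrt()).abs() < 1e-9);
    // A digital-silence frame sits far below a threshold of 100.
    assert!(rms(&[0i16; 960]) < 100.0);
}
```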

View File

@@ -15,5 +15,18 @@ hkdf = { workspace = true }
sha2 = { workspace = true }
rand = { workspace = true }
tracing = { workspace = true }
bip39 = "2"
hex = "0.4"
# featherChat identity — the source of truth for Seed, IdentityKeyPair, Fingerprint
warzone-protocol = { path = "../../deps/featherchat/warzone/crates/warzone-protocol" }
[dev-dependencies]
ed25519-dalek = { workspace = true }
warzone-protocol = { path = "../../deps/featherchat/warzone/crates/warzone-protocol" }
wzp-proto = { workspace = true }
wzp-client = { path = "../wzp-client" }
wzp-relay = { path = "../wzp-relay" }
serde_json = "1"
serde = { workspace = true }
bincode = "1"

View File

@@ -0,0 +1,281 @@
//! featherChat-compatible identity module.
//!
//! Mirrors `warzone-protocol/src/identity.rs` and `warzone-protocol/src/mnemonic.rs`
//! from featherChat. Same seed → same keys → same fingerprint in both codebases.
//!
//! Source of truth: deps/featherchat/warzone/crates/warzone-protocol/src/identity.rs
use ed25519_dalek::{SigningKey, VerifyingKey};
use hkdf::Hkdf;
use sha2::{Digest, Sha256};
use x25519_dalek::StaticSecret;
/// The root secret — 32 bytes from which all keys are derived.
/// Displayed to users as a BIP39 mnemonic (24 words).
///
/// Mirrors: `warzone-protocol::identity::Seed`
pub struct Seed(pub [u8; 32]);
impl Seed {
/// Generate a new random seed.
pub fn generate() -> Self {
let mut bytes = [0u8; 32];
rand::RngCore::fill_bytes(&mut rand::rngs::OsRng, &mut bytes);
Seed(bytes)
}
/// Create seed from raw bytes.
pub fn from_bytes(bytes: [u8; 32]) -> Self {
Seed(bytes)
}
/// Create seed from hex string (64 hex chars).
pub fn from_hex(hex_str: &str) -> Result<Self, String> {
let bytes = hex::decode(hex_str).map_err(|e| format!("invalid hex: {e}"))?;
if bytes.len() != 32 {
return Err(format!("expected 32 bytes, got {}", bytes.len()));
}
let mut seed = [0u8; 32];
seed.copy_from_slice(&bytes);
Ok(Seed(seed))
}
/// Derive the full identity keypair from this seed.
///
/// Uses identical HKDF derivation as featherChat:
/// - Ed25519: `HKDF(seed, salt=None, info="warzone-ed25519")`
/// - X25519: `HKDF(seed, salt=None, info="warzone-x25519")`
pub fn derive_identity(&self) -> IdentityKeyPair {
let hk = Hkdf::<Sha256>::new(None, &self.0);
let mut ed_bytes = [0u8; 32];
hk.expand(b"warzone-ed25519", &mut ed_bytes)
.expect("HKDF expand for Ed25519");
let signing = SigningKey::from_bytes(&ed_bytes);
ed_bytes.fill(0);
let mut x_bytes = [0u8; 32];
hk.expand(b"warzone-x25519", &mut x_bytes)
.expect("HKDF expand for X25519");
let encryption = StaticSecret::from(x_bytes);
x_bytes.fill(0);
IdentityKeyPair {
signing,
encryption,
}
}
/// Convert to BIP39 mnemonic (24 words).
///
/// Mirrors: `warzone-protocol::mnemonic::seed_to_mnemonic`
pub fn to_mnemonic(&self) -> String {
let mnemonic =
bip39::Mnemonic::from_entropy(&self.0).expect("32 bytes is valid BIP39 entropy");
mnemonic.to_string()
}
/// Recover seed from BIP39 mnemonic (24 words).
///
/// Mirrors: `warzone-protocol::mnemonic::mnemonic_to_seed`
pub fn from_mnemonic(words: &str) -> Result<Self, String> {
let mnemonic: bip39::Mnemonic = words.parse().map_err(|e| format!("invalid mnemonic: {e}"))?;
let entropy = mnemonic.to_entropy();
if entropy.len() != 32 {
return Err(format!("expected 32 bytes entropy, got {}", entropy.len()));
}
let mut seed = [0u8; 32];
seed.copy_from_slice(&entropy);
Ok(Seed(seed))
}
}
impl Drop for Seed {
fn drop(&mut self) {
self.0.fill(0); // zeroize on drop
}
}
/// The full identity keypair derived from a seed.
///
/// Mirrors: `warzone-protocol::identity::IdentityKeyPair`
pub struct IdentityKeyPair {
pub signing: SigningKey,
pub encryption: StaticSecret,
}
impl IdentityKeyPair {
/// Get the public identity (safe to share).
pub fn public_identity(&self) -> PublicIdentity {
let verifying = self.signing.verifying_key();
let encryption_pub = x25519_dalek::PublicKey::from(&self.encryption);
let fingerprint = Fingerprint::from_verifying_key(&verifying);
PublicIdentity {
signing: verifying,
encryption: encryption_pub,
fingerprint,
}
}
}
/// Truncated SHA-256 hash of the Ed25519 public key (16 bytes).
/// Displayed as `xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx`.
///
/// Mirrors: `warzone-protocol::types::Fingerprint`
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct Fingerprint(pub [u8; 16]);
impl Fingerprint {
pub fn from_verifying_key(key: &VerifyingKey) -> Self {
let hash = Sha256::digest(key.as_bytes());
let mut fp = [0u8; 16];
fp.copy_from_slice(&hash[..16]);
Fingerprint(fp)
}
/// Parse from hex string (with or without colons).
pub fn from_hex(s: &str) -> Result<Self, String> {
let clean: String = s.chars().filter(|c| c.is_ascii_hexdigit()).collect();
let bytes = hex::decode(&clean).map_err(|e| format!("invalid hex: {e}"))?;
if bytes.len() < 16 {
return Err("fingerprint too short".to_string());
}
let mut fp = [0u8; 16];
fp.copy_from_slice(&bytes[..16]);
Ok(Fingerprint(fp))
}
/// As raw bytes.
pub fn as_bytes(&self) -> &[u8; 16] {
&self.0
}
/// As hex string without colons.
pub fn to_hex(&self) -> String {
hex::encode(self.0)
}
}
impl std::fmt::Display for Fingerprint {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{:04x}:{:04x}:{:04x}:{:04x}:{:04x}:{:04x}:{:04x}:{:04x}",
u16::from_be_bytes([self.0[0], self.0[1]]),
u16::from_be_bytes([self.0[2], self.0[3]]),
u16::from_be_bytes([self.0[4], self.0[5]]),
u16::from_be_bytes([self.0[6], self.0[7]]),
u16::from_be_bytes([self.0[8], self.0[9]]),
u16::from_be_bytes([self.0[10], self.0[11]]),
u16::from_be_bytes([self.0[12], self.0[13]]),
u16::from_be_bytes([self.0[14], self.0[15]]),
)
}
}
impl std::fmt::Debug for Fingerprint {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Fingerprint({})", self)
}
}
/// The public portion of an identity — safe to share with anyone.
pub struct PublicIdentity {
pub signing: VerifyingKey,
pub encryption: x25519_dalek::PublicKey,
pub fingerprint: Fingerprint,
}
/// Hash a human-readable room/group name into an opaque hex string.
/// Used as QUIC SNI to prevent leaking group names to network observers.
///
/// `hash_room_name("my-group")` → 32 hex chars (16 bytes of SHA-256).
///
/// Mirrors the convention in featherChat WZP-FC-5:
/// `SHA-256("featherchat-group:" + group_name)[:16]`
pub fn hash_room_name(group_name: &str) -> String {
use sha2::{Digest, Sha256};
let mut hasher = Sha256::new();
hasher.update(b"featherchat-group:");
hasher.update(group_name.as_bytes());
let hash = hasher.finalize();
hex::encode(&hash[..16])
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn deterministic_derivation() {
let seed = Seed::from_bytes([42u8; 32]);
let id1 = seed.derive_identity();
let id2 = seed.derive_identity();
assert_eq!(
id1.signing.verifying_key().as_bytes(),
id2.signing.verifying_key().as_bytes(),
);
}
#[test]
fn mnemonic_roundtrip() {
let seed = Seed::generate();
let words = seed.to_mnemonic();
let word_count = words.split_whitespace().count();
assert_eq!(word_count, 24);
let recovered = Seed::from_mnemonic(&words).unwrap();
assert_eq!(seed.0, recovered.0);
}
#[test]
fn hex_roundtrip() {
let seed = Seed::generate();
let hex_str = hex::encode(seed.0);
let recovered = Seed::from_hex(&hex_str).unwrap();
assert_eq!(seed.0, recovered.0);
}
#[test]
fn fingerprint_format() {
let seed = Seed::generate();
let id = seed.derive_identity();
let pub_id = id.public_identity();
let fp_str = pub_id.fingerprint.to_string();
// Format: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
assert_eq!(fp_str.len(), 39);
assert_eq!(fp_str.chars().filter(|c| *c == ':').count(), 7);
}
#[test]
fn hash_room_name_deterministic() {
let h1 = hash_room_name("my-group");
let h2 = hash_room_name("my-group");
assert_eq!(h1, h2);
assert_eq!(h1.len(), 32); // 16 bytes = 32 hex chars
assert!(h1.chars().all(|c| c.is_ascii_hexdigit()));
}
#[test]
fn hash_room_name_different_inputs() {
assert_ne!(hash_room_name("alpha"), hash_room_name("beta"));
}
#[test]
fn matches_handshake_derivation() {
use wzp_proto::KeyExchange;
// Verify identity module matches the KeyExchange trait implementation
let seed = [99u8; 32];
let id = Seed::from_bytes(seed).derive_identity();
let kx = crate::WarzoneKeyExchange::from_identity_seed(&seed);
assert_eq!(
id.signing.verifying_key().as_bytes(),
&kx.identity_public_key(),
);
assert_eq!(
id.public_identity().fingerprint.as_bytes(),
&kx.fingerprint(),
);
}
}
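The colon-grouped rendering produced by the `Display` impl above can be sketched standalone. This is an illustrative reimplementation using only the standard library; `format_fingerprint` is a hypothetical free function, not part of the crate:

```rust
// Hypothetical helper mirroring Fingerprint's Display impl: 16 bytes ->
// 8 groups of 4 lowercase hex chars (big-endian u16 pairs) joined by ':'.
fn format_fingerprint(bytes: &[u8; 16]) -> String {
    bytes
        .chunks(2)
        .map(|pair| format!("{:04x}", u16::from_be_bytes([pair[0], pair[1]])))
        .collect::<Vec<_>>()
        .join(":")
}

fn main() {
    let fp: [u8; 16] = [
        0xa3, 0xf8, 0x1b, 0x2c, 0x3d, 0x4e, 0x5f, 0x60,
        0x71, 0x82, 0x93, 0xa4, 0xb5, 0xc6, 0xd7, 0xe8,
    ];
    let s = format_fingerprint(&fp);
    assert_eq!(s, "a3f8:1b2c:3d4e:5f60:7182:93a4:b5c6:d7e8");
    assert_eq!(s.len(), 39); // 8 groups of 4 hex chars + 7 colons
}
```

The 39-character length and 7-colon count are exactly what the `fingerprint_format` test checks.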


@@ -9,12 +9,14 @@
pub mod anti_replay;
pub mod handshake;
pub mod identity;
pub mod nonce;
pub mod rekey;
pub mod session;
pub use anti_replay::AntiReplayWindow;
pub use handshake::WarzoneKeyExchange;
pub use identity::{hash_room_name, Fingerprint, IdentityKeyPair, PublicIdentity, Seed};
pub use nonce::{build_nonce, Direction};
pub use rekey::RekeyManager;
pub use session::ChaChaSession;


@@ -0,0 +1,571 @@
//! Cross-project compatibility tests between WZP and featherChat.
//!
//! Verifies:
//! 1. Identity: same seed → same keys → same fingerprints (WZP-FC-8)
//! 2. CallSignal: WZP SignalMessage serializes into FC CallSignal.payload correctly
//! 3. Auth: WZP auth module request/response matches FC's /v1/auth/validate contract
//! 4. Mnemonic: BIP39 interop between both implementations
use wzp_proto::KeyExchange;
// ─── Identity Compatibility (WZP-FC-8) ──────────────────────────────────────
#[test]
fn same_seed_same_ed25519_key() {
let seed = [42u8; 32];
let wzp_kx = wzp_crypto::WarzoneKeyExchange::from_identity_seed(&seed);
let wzp_pub = wzp_kx.identity_public_key();
let fc_seed = warzone_protocol::identity::Seed::from_bytes(seed);
let fc_id = fc_seed.derive_identity();
let fc_pub = fc_id.signing.verifying_key();
assert_eq!(&wzp_pub, fc_pub.as_bytes(), "Ed25519 keys must match");
}
#[test]
fn same_seed_same_fingerprint() {
let seed = [99u8; 32];
let wzp_kx = wzp_crypto::WarzoneKeyExchange::from_identity_seed(&seed);
let wzp_fp = wzp_kx.fingerprint();
let fc_seed = warzone_protocol::identity::Seed::from_bytes(seed);
let fc_fp = fc_seed.derive_identity().public_identity().fingerprint.0;
assert_eq!(wzp_fp, fc_fp, "Fingerprints must match");
}
#[test]
fn wzp_identity_module_matches_featherchat() {
let seed = [0xAB; 32];
let wzp_pub = wzp_crypto::Seed::from_bytes(seed)
.derive_identity()
.public_identity();
let fc_pub = warzone_protocol::identity::Seed::from_bytes(seed)
.derive_identity()
.public_identity();
assert_eq!(wzp_pub.signing.as_bytes(), fc_pub.signing.as_bytes());
assert_eq!(wzp_pub.encryption.as_bytes(), fc_pub.encryption.as_bytes());
assert_eq!(wzp_pub.fingerprint.0, fc_pub.fingerprint.0);
assert_eq!(wzp_pub.fingerprint.to_string(), fc_pub.fingerprint.to_string());
}
#[test]
fn random_seed_identity_match() {
let fc_seed = warzone_protocol::identity::Seed::generate();
let raw = fc_seed.0;
let fc_fp = fc_seed.derive_identity().public_identity().fingerprint.0;
let wzp_fp = wzp_crypto::WarzoneKeyExchange::from_identity_seed(&raw).fingerprint();
assert_eq!(wzp_fp, fc_fp);
}
#[test]
fn hkdf_derive_matches() {
let seed = [0x55; 32];
let fc_ed = warzone_protocol::crypto::hkdf_derive(&seed, b"", b"warzone-ed25519", 32);
let fc_signing = ed25519_dalek::SigningKey::from_bytes(&fc_ed.try_into().unwrap());
let fc_pub = fc_signing.verifying_key();
let wzp_pub = wzp_crypto::WarzoneKeyExchange::from_identity_seed(&seed).identity_public_key();
assert_eq!(&wzp_pub, fc_pub.as_bytes());
}
// ─── BIP39 Mnemonic Interop ─────────────────────────────────────────────────
#[test]
fn mnemonic_roundtrip_fc_to_wzp() {
let seed = [0x77; 32];
let fc_mnemonic = warzone_protocol::identity::Seed::from_bytes(seed).to_mnemonic();
let wzp_recovered = wzp_crypto::Seed::from_mnemonic(&fc_mnemonic).unwrap();
assert_eq!(wzp_recovered.0, seed);
}
#[test]
fn mnemonic_roundtrip_wzp_to_fc() {
let seed = [0x33; 32];
let wzp_mnemonic = wzp_crypto::Seed::from_bytes(seed).to_mnemonic();
let fc_recovered = warzone_protocol::identity::Seed::from_mnemonic(&wzp_mnemonic).unwrap();
assert_eq!(fc_recovered.0, seed);
}
#[test]
fn mnemonic_strings_identical() {
let seed = [0xDE; 32];
let fc_words = warzone_protocol::identity::Seed::from_bytes(seed).to_mnemonic();
let wzp_words = wzp_crypto::Seed::from_bytes(seed).to_mnemonic();
assert_eq!(fc_words, wzp_words);
}
// ─── CallSignal Payload Interop ─────────────────────────────────────────────
#[test]
fn wzp_signal_serializes_into_fc_callsignal_payload() {
// WZP creates a CallOffer SignalMessage
let offer = wzp_proto::SignalMessage::CallOffer {
identity_pub: [1u8; 32],
ephemeral_pub: [2u8; 32],
signature: vec![3u8; 64],
supported_profiles: vec![wzp_proto::QualityProfile::GOOD],
};
// Encode as featherChat CallSignal payload
let payload = wzp_client::featherchat::encode_call_payload(
&offer,
Some("relay.example.com:4433"),
Some("myroom"),
);
// Verify it's valid JSON
let parsed: serde_json::Value = serde_json::from_str(&payload).unwrap();
assert!(parsed.get("signal").is_some());
assert_eq!(parsed["relay_addr"], "relay.example.com:4433");
assert_eq!(parsed["room"], "myroom");
// featherChat would put this in WireMessage::CallSignal { payload, ... }
// Verify the FC side can create a CallSignal with this payload
let fc_msg = warzone_protocol::message::WireMessage::CallSignal {
id: "call-123".to_string(),
sender_fingerprint: "abcd1234".to_string(),
signal_type: warzone_protocol::message::CallSignalType::Offer,
payload: payload.clone(),
target: "peer-fingerprint".to_string(),
};
// Verify it serializes with bincode (FC's wire format)
let encoded = bincode::serialize(&fc_msg).unwrap();
assert!(!encoded.is_empty());
// And deserializes back
let decoded: warzone_protocol::message::WireMessage = bincode::deserialize(&encoded).unwrap();
if let warzone_protocol::message::WireMessage::CallSignal {
id, payload: p, signal_type, ..
} = decoded
{
assert_eq!(id, "call-123");
assert!(matches!(signal_type, warzone_protocol::message::CallSignalType::Offer));
// Decode the WZP payload back
let wzp_payload = wzp_client::featherchat::decode_call_payload(&p).unwrap();
assert_eq!(wzp_payload.relay_addr.unwrap(), "relay.example.com:4433");
assert!(matches!(wzp_payload.signal, wzp_proto::SignalMessage::CallOffer { .. }));
} else {
panic!("expected CallSignal");
}
}
#[test]
fn wzp_answer_round_trips_through_fc_callsignal() {
let answer = wzp_proto::SignalMessage::CallAnswer {
identity_pub: [10u8; 32],
ephemeral_pub: [20u8; 32],
signature: vec![30u8; 64],
chosen_profile: wzp_proto::QualityProfile::DEGRADED,
};
let payload = wzp_client::featherchat::encode_call_payload(&answer, None, None);
let fc_msg = warzone_protocol::message::WireMessage::CallSignal {
id: "call-456".to_string(),
sender_fingerprint: "efgh5678".to_string(),
signal_type: warzone_protocol::message::CallSignalType::Answer,
payload,
target: "caller-fp".to_string(),
};
let bytes = bincode::serialize(&fc_msg).unwrap();
let decoded: warzone_protocol::message::WireMessage = bincode::deserialize(&bytes).unwrap();
if let warzone_protocol::message::WireMessage::CallSignal { payload, .. } = decoded {
let wzp = wzp_client::featherchat::decode_call_payload(&payload).unwrap();
if let wzp_proto::SignalMessage::CallAnswer { chosen_profile, .. } = wzp.signal {
assert_eq!(chosen_profile.codec, wzp_proto::CodecId::Opus6k);
} else {
panic!("expected CallAnswer");
}
}
}
#[test]
fn wzp_hangup_round_trips_through_fc_callsignal() {
let hangup = wzp_proto::SignalMessage::Hangup {
reason: wzp_proto::HangupReason::Normal,
};
let payload = wzp_client::featherchat::encode_call_payload(&hangup, None, None);
let signal_type = wzp_client::featherchat::signal_to_call_type(&hangup);
assert!(matches!(signal_type, wzp_client::featherchat::CallSignalType::Hangup));
let fc_msg = warzone_protocol::message::WireMessage::CallSignal {
id: "call-789".to_string(),
sender_fingerprint: "xyz".to_string(),
signal_type: warzone_protocol::message::CallSignalType::Hangup,
payload,
target: "peer".to_string(),
};
let bytes = bincode::serialize(&fc_msg).unwrap();
let decoded: warzone_protocol::message::WireMessage = bincode::deserialize(&bytes).unwrap();
if let warzone_protocol::message::WireMessage::CallSignal { payload, .. } = decoded {
let wzp = wzp_client::featherchat::decode_call_payload(&payload).unwrap();
assert!(matches!(wzp.signal, wzp_proto::SignalMessage::Hangup { .. }));
}
}
// ─── Auth Token Contract ────────────────────────────────────────────────────
#[test]
fn auth_validate_request_matches_fc_contract() {
// WZP sends: { "token": "..." }
// FC expects: ValidateRequest { token: String }
let wzp_request = serde_json::json!({ "token": "test-token-123" });
let json_str = wzp_request.to_string();
// FC can deserialize this (same shape as their ValidateRequest)
#[derive(serde::Deserialize)]
struct FcValidateRequest {
token: String,
}
let fc_req: FcValidateRequest = serde_json::from_str(&json_str).unwrap();
assert_eq!(fc_req.token, "test-token-123");
}
#[test]
fn auth_validate_response_matches_wzp_expectations() {
// FC returns: { "valid": true, "fingerprint": "...", "alias": "..." }
// WZP expects: wzp_relay::auth::ValidateResponse
let fc_response = serde_json::json!({
"valid": true,
"fingerprint": "a3f8:1b2c:3d4e:5f60:7182:93a4:b5c6:d7e8",
"alias": "manwe",
"eth_address": null
});
let wzp_resp: wzp_relay::auth::ValidateResponse =
serde_json::from_value(fc_response).unwrap();
assert!(wzp_resp.valid);
assert_eq!(
wzp_resp.fingerprint.unwrap(),
"a3f8:1b2c:3d4e:5f60:7182:93a4:b5c6:d7e8"
);
assert_eq!(wzp_resp.alias.unwrap(), "manwe");
}
#[test]
fn auth_invalid_response_matches() {
let fc_response = serde_json::json!({ "valid": false });
let wzp_resp: wzp_relay::auth::ValidateResponse =
serde_json::from_value(fc_response).unwrap();
assert!(!wzp_resp.valid);
assert!(wzp_resp.fingerprint.is_none());
}
// ─── Signal Type Mapping ────────────────────────────────────────────────────
#[test]
fn all_signal_types_map_correctly() {
use wzp_client::featherchat::{signal_to_call_type, CallSignalType};
let cases: Vec<(wzp_proto::SignalMessage, &str)> = vec![
(
wzp_proto::SignalMessage::CallOffer {
identity_pub: [0; 32], ephemeral_pub: [0; 32],
signature: vec![], supported_profiles: vec![],
},
"Offer",
),
(
wzp_proto::SignalMessage::CallAnswer {
identity_pub: [0; 32], ephemeral_pub: [0; 32],
signature: vec![],
chosen_profile: wzp_proto::QualityProfile::GOOD,
},
"Answer",
),
(
wzp_proto::SignalMessage::IceCandidate {
candidate: "candidate:1".to_string(),
},
"IceCandidate",
),
(
wzp_proto::SignalMessage::Hangup {
reason: wzp_proto::HangupReason::Normal,
},
"Hangup",
),
];
for (signal, expected_name) in cases {
let ct = signal_to_call_type(&signal);
let name = format!("{ct:?}");
assert_eq!(name, expected_name, "signal type mapping for {expected_name}");
}
}
// ─── Room Hashing + Access Control ─────────────────────────────────────────
#[test]
fn hash_room_name_deterministic() {
let h1 = wzp_crypto::hash_room_name("ops-channel");
let h2 = wzp_crypto::hash_room_name("ops-channel");
assert_eq!(h1, h2, "same input must produce same hash");
}
#[test]
fn hash_room_name_is_32_hex_chars() {
let h = wzp_crypto::hash_room_name("test-room");
assert_eq!(h.len(), 32, "hash must be 32 hex chars (16 bytes)");
assert!(
h.chars().all(|c| c.is_ascii_hexdigit()),
"hash must contain only hex characters, got: {h}"
);
}
#[test]
fn hash_room_name_different_inputs() {
let h1 = wzp_crypto::hash_room_name("alpha");
let h2 = wzp_crypto::hash_room_name("beta");
let h3 = wzp_crypto::hash_room_name("alpha-2");
assert_ne!(h1, h2, "different names must produce different hashes");
assert_ne!(h1, h3);
assert_ne!(h2, h3);
}
#[test]
fn hash_room_name_matches_fc_convention() {
// Manual SHA-256("featherchat-group:" + name)[:16] using the sha2 crate directly
use sha2::{Digest, Sha256};
let name = "warzone-squad";
let mut hasher = Sha256::new();
hasher.update(b"featherchat-group:");
hasher.update(name.as_bytes());
let digest = hasher.finalize();
let expected = hex::encode(&digest[..16]);
let actual = wzp_crypto::hash_room_name(name);
assert_eq!(
actual, expected,
"hash_room_name must equal SHA-256('featherchat-group:' + name)[:16]"
);
}
#[test]
fn room_acl_open_mode() {
let mgr = wzp_relay::room::RoomManager::new();
// Open mode: everyone is authorized regardless of fingerprint presence
assert!(mgr.is_authorized("any-room", None));
assert!(mgr.is_authorized("any-room", Some("random-fp")));
assert!(mgr.is_authorized("another-room", Some("abc:def")));
}
#[test]
fn room_acl_enforced() {
let mgr = wzp_relay::room::RoomManager::with_acl();
// ACL enabled but no fingerprint provided => denied
assert!(
!mgr.is_authorized("room1", None),
"ACL mode must reject connections without a fingerprint"
);
}
#[test]
fn room_acl_allows_listed() {
let mut mgr = wzp_relay::room::RoomManager::with_acl();
mgr.allow("secure-room", "alice-fp");
mgr.allow("secure-room", "bob-fp");
assert!(mgr.is_authorized("secure-room", Some("alice-fp")));
assert!(mgr.is_authorized("secure-room", Some("bob-fp")));
}
#[test]
fn room_acl_denies_unlisted() {
let mut mgr = wzp_relay::room::RoomManager::with_acl();
mgr.allow("secure-room", "alice-fp");
assert!(
!mgr.is_authorized("secure-room", Some("eve-fp")),
"unlisted fingerprints must be denied"
);
assert!(
!mgr.is_authorized("secure-room", Some("mallory-fp")),
"unlisted fingerprints must be denied"
);
// No fingerprint at all => also denied
assert!(
!mgr.is_authorized("secure-room", None),
"no fingerprint must be denied in ACL mode"
);
}
// ─── Web Bridge Auth + Proto Standalone + S-9 ──────────────────────────────
/// WZP-S-6: featherChat may include `eth_address` in ValidateResponse.
/// WZP's ValidateResponse must handle it gracefully (serde ignores unknown fields).
#[test]
fn auth_response_with_eth_address() {
// FC response with eth_address present (non-null)
let with_eth = serde_json::json!({
"valid": true,
"fingerprint": "a1b2:c3d4:e5f6:7890:abcd:ef01:2345:6789",
"alias": "vitalik",
"eth_address": "0x1234567890abcdef1234567890abcdef12345678"
});
let resp: wzp_relay::auth::ValidateResponse =
serde_json::from_value(with_eth).unwrap();
assert!(resp.valid);
assert_eq!(
resp.fingerprint.unwrap(),
"a1b2:c3d4:e5f6:7890:abcd:ef01:2345:6789"
);
assert_eq!(resp.alias.unwrap(), "vitalik");
// FC response with eth_address = null
let with_null_eth = serde_json::json!({
"valid": true,
"fingerprint": "dead:beef:cafe:babe:1234:5678:9abc:def0",
"alias": "anon",
"eth_address": null
});
let resp2: wzp_relay::auth::ValidateResponse =
serde_json::from_value(with_null_eth).unwrap();
assert!(resp2.valid);
assert_eq!(
resp2.fingerprint.unwrap(),
"dead:beef:cafe:babe:1234:5678:9abc:def0"
);
// FC response without eth_address at all
let without_eth = serde_json::json!({
"valid": false
});
let resp3: wzp_relay::auth::ValidateResponse =
serde_json::from_value(without_eth).unwrap();
assert!(!resp3.valid);
}
/// WZP-S-7: SignalMessage::AuthToken { token } exists and round-trips via serde.
#[test]
fn wzp_proto_has_auth_token_variant() {
let msg = wzp_proto::SignalMessage::AuthToken {
token: "fc-bearer-token-xyz".to_string(),
};
// Serialize to JSON
let json = serde_json::to_string(&msg).unwrap();
assert!(json.contains("AuthToken"));
assert!(json.contains("fc-bearer-token-xyz"));
// Deserialize back
let decoded: wzp_proto::SignalMessage = serde_json::from_str(&json).unwrap();
if let wzp_proto::SignalMessage::AuthToken { token } = decoded {
assert_eq!(token, "fc-bearer-token-xyz");
} else {
panic!("expected AuthToken variant, got: {decoded:?}");
}
}
/// WZP-S-6: WZP CallSignalType has all variants matching featherChat's set.
#[test]
fn all_fc_call_signal_types_representable() {
use wzp_client::featherchat::CallSignalType;
// Verify each FC variant can be constructed and debug-printed
let variants: Vec<(CallSignalType, &str)> = vec![
(CallSignalType::Offer, "Offer"),
(CallSignalType::Answer, "Answer"),
(CallSignalType::IceCandidate, "IceCandidate"),
(CallSignalType::Hangup, "Hangup"),
(CallSignalType::Reject, "Reject"),
(CallSignalType::Ringing, "Ringing"),
(CallSignalType::Busy, "Busy"),
];
assert_eq!(variants.len(), 7, "featherChat defines exactly 7 call signal types");
for (variant, expected_name) in &variants {
let name = format!("{variant:?}");
assert_eq!(&name, expected_name);
// Each variant should serialize/deserialize cleanly
let json = serde_json::to_string(variant).unwrap();
let round_tripped: CallSignalType = serde_json::from_str(&json).unwrap();
assert_eq!(format!("{round_tripped:?}"), *expected_name);
}
}
/// WZP-S-9: hashed room name used as QUIC SNI must be valid — lowercase hex only.
#[test]
fn hash_room_name_used_as_sni_is_valid() {
let long_name = "x".repeat(1000);
let test_rooms = [
"general",
"Voice Room #1",
"café-lounge",
"a]b[c{d}e",
"\u{1f480}\u{1f525}",
long_name.as_str(),
];
for room in &test_rooms {
let hashed = wzp_crypto::hash_room_name(room);
// Must be non-empty
assert!(!hashed.is_empty(), "hash of '{room}' must not be empty");
// Must contain only lowercase hex chars (valid for SNI)
for ch in hashed.chars() {
assert!(
ch.is_ascii_hexdigit() && !ch.is_ascii_uppercase(),
"hash of '{room}' contains invalid SNI char: '{ch}' (full: {hashed})"
);
}
// SHA-256 truncated to 16 bytes -> 32 hex chars
assert_eq!(
hashed.len(),
32,
"hash should be 32 hex chars (16 bytes), got {} for '{room}'",
hashed.len()
);
}
}
/// WZP-S-7: wzp-proto Cargo.toml must be standalone — no `.workspace = true` inheritance.
#[test]
fn wzp_proto_cargo_toml_is_standalone() {
// Try both paths (run from workspace root or from crate directory)
let candidates = [
"crates/wzp-proto/Cargo.toml",
"../wzp-proto/Cargo.toml",
];
let contents = candidates
.iter()
.find_map(|p| std::fs::read_to_string(p).ok())
.expect("could not read crates/wzp-proto/Cargo.toml from any expected path");
// Must NOT contain ".workspace = true" anywhere — that would break standalone use
assert!(
!contents.contains(".workspace = true"),
"wzp-proto Cargo.toml must not use workspace inheritance (.workspace = true), \
found in:\n{contents}"
);
// Sanity: it should still be a valid Cargo.toml with the right package name
assert!(
contents.contains("name = \"wzp-proto\""),
"expected package name 'wzp-proto' in Cargo.toml"
);
}


@@ -1,17 +1,22 @@
 [package]
 name = "wzp-proto"
-version.workspace = true
+version = "0.1.0"
-edition.workspace = true
+edition = "2024"
-license.workspace = true
+license = "MIT OR Apache-2.0"
-rust-version.workspace = true
+rust-version = "1.85"
 description = "WarzonePhone protocol types, traits, and core logic"
+
+# This crate is designed to be importable standalone — no workspace inheritance.
+# featherChat and other projects can depend on it directly via git:
+# wzp-proto = { git = "ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git", path = "crates/wzp-proto" }
 [dependencies]
-bytes = { workspace = true }
+bytes = "1"
-thiserror = { workspace = true }
+thiserror = "2"
-async-trait = { workspace = true }
+async-trait = "0.1"
-serde = { workspace = true }
+serde = { version = "1", features = ["derive"] }
-tracing = { workspace = true }
+tracing = "0.1"
 [dev-dependencies]
-tokio = { workspace = true }
+tokio = { version = "1", features = ["full"] }
+serde_json = "1"


@@ -16,6 +16,8 @@ pub enum CodecId {
Codec2_3200 = 3,
/// Codec2 at 1200bps (catastrophic conditions)
Codec2_1200 = 4,
/// Comfort noise descriptor (silence suppression)
ComfortNoise = 5,
}
impl CodecId {
@@ -27,6 +29,7 @@ impl CodecId {
Self::Opus6k => 6_000,
Self::Codec2_3200 => 3_200,
Self::Codec2_1200 => 1_200,
Self::ComfortNoise => 0,
}
}
@@ -38,6 +41,7 @@ impl CodecId {
Self::Opus6k => 40,
Self::Codec2_3200 => 20,
Self::Codec2_1200 => 40,
Self::ComfortNoise => 20,
}
}
@@ -46,6 +50,7 @@ impl CodecId {
match self {
Self::Opus24k | Self::Opus16k | Self::Opus6k => 48_000,
Self::Codec2_3200 | Self::Codec2_1200 => 8_000,
Self::ComfortNoise => 48_000,
}
}
@@ -57,6 +62,7 @@ impl CodecId {
2 => Some(Self::Opus6k),
3 => Some(Self::Codec2_3200),
4 => Some(Self::Codec2_1200),
5 => Some(Self::ComfortNoise),
_ => None,
}
}


@@ -2,6 +2,97 @@ use std::collections::BTreeMap;
use crate::packet::MediaPacket;
// ---------------------------------------------------------------------------
// Adaptive playout delay (NetEq-inspired)
// ---------------------------------------------------------------------------
/// Adaptive playout delay estimator based on observed inter-arrival jitter.
///
/// Inspired by WebRTC NetEq and IAX2 adaptive jitter buffering. Tracks an
/// exponential moving average (EMA) of inter-packet arrival jitter and
/// converts it to a target buffer depth in packets.
pub struct AdaptivePlayoutDelay {
/// Current target delay in packets (equivalent to target_depth).
target_delay: usize,
/// Minimum allowed delay.
min_delay: usize,
/// Maximum allowed delay.
max_delay: usize,
/// Exponential moving average of inter-packet arrival jitter (ms).
jitter_ema: f64,
/// EMA smoothing factor (0.0-1.0, lower = smoother).
alpha: f64,
/// Last packet arrival timestamp (for computing inter-arrival jitter).
last_arrival_ms: Option<u64>,
/// Last packet expected timestamp.
last_expected_ms: Option<u64>,
}
/// Frame duration in milliseconds (20ms Opus/Codec2 frames).
const FRAME_DURATION_MS: f64 = 20.0;
/// Safety margin added to jitter-derived target (in packets).
const SAFETY_MARGIN_PACKETS: f64 = 2.0;
/// Default EMA smoothing factor.
const DEFAULT_ALPHA: f64 = 0.05;
impl AdaptivePlayoutDelay {
/// Create a new adaptive playout delay estimator.
///
/// - `min_delay`: minimum target delay in packets
/// - `max_delay`: maximum target delay in packets
pub fn new(min_delay: usize, max_delay: usize) -> Self {
Self {
target_delay: min_delay,
min_delay,
max_delay,
jitter_ema: 0.0,
alpha: DEFAULT_ALPHA,
last_arrival_ms: None,
last_expected_ms: None,
}
}
/// Update with a new packet arrival. Returns the new target delay.
///
/// - `arrival_ms`: when the packet actually arrived (wall clock)
/// - `expected_ms`: when it should have arrived (based on sequence * 20ms)
pub fn update(&mut self, arrival_ms: u64, expected_ms: u64) -> usize {
if let (Some(last_arrival), Some(last_expected)) =
(self.last_arrival_ms, self.last_expected_ms)
{
let actual_delta = arrival_ms as f64 - last_arrival as f64;
let expected_delta = expected_ms as f64 - last_expected as f64;
let jitter = (actual_delta - expected_delta).abs();
// Update EMA
self.jitter_ema = self.alpha * jitter + (1.0 - self.alpha) * self.jitter_ema;
// Convert jitter estimate to target delay in packets
let raw_target = (self.jitter_ema / FRAME_DURATION_MS).ceil() + SAFETY_MARGIN_PACKETS;
self.target_delay =
(raw_target as usize).clamp(self.min_delay, self.max_delay);
}
self.last_arrival_ms = Some(arrival_ms);
self.last_expected_ms = Some(expected_ms);
self.target_delay
}
/// Get current target delay in packets.
pub fn target_delay(&self) -> usize {
self.target_delay
}
/// Get current jitter estimate in ms.
pub fn jitter_estimate_ms(&self) -> f64 {
self.jitter_ema
}
}
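The jitter-to-target mapping inside `update` can be checked numerically. A minimal standalone sketch, assuming the same constants as above (20 ms frames, a 2-packet safety margin); `target_from_jitter` is a hypothetical helper, not part of the crate:

```rust
// Hypothetical helper reproducing the mapping in AdaptivePlayoutDelay::update:
// target = ceil(jitter_ema / frame_duration) + safety_margin, clamped to
// [min, max] packets.
fn target_from_jitter(jitter_ema_ms: f64, min: usize, max: usize) -> usize {
    const FRAME_DURATION_MS: f64 = 20.0;
    const SAFETY_MARGIN_PACKETS: f64 = 2.0;
    let raw = (jitter_ema_ms / FRAME_DURATION_MS).ceil() + SAFETY_MARGIN_PACKETS;
    (raw as usize).clamp(min, max)
}

fn main() {
    // No jitter: the safety margin alone, clamped up to min_delay.
    assert_eq!(target_from_jitter(0.0, 3, 20), 3);
    // 45 ms of jitter: ceil(2.25) + 2 = 5 packets (~100 ms of buffer).
    assert_eq!(target_from_jitter(45.0, 3, 20), 5);
    // Pathological jitter saturates at max_delay.
    assert_eq!(target_from_jitter(1000.0, 3, 20), 20);
}
```

With the web bridge's `jitter_min: 1` / `jitter_target: 3` / `jitter_max: 20` settings from the commit above, this is the range the estimator moves through.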
// ---------------------------------------------------------------------------
// Jitter buffer
// ---------------------------------------------------------------------------
/// Adaptive jitter buffer that reorders packets by sequence number.
///
/// Designed for the lossy relay link with up to 5 seconds of buffering depth.
@@ -21,6 +112,8 @@ pub struct JitterBuffer {
initialized: bool,
/// Statistics.
stats: JitterStats,
/// Optional adaptive playout delay estimator.
adaptive: Option<AdaptivePlayoutDelay>,
} }
/// Jitter buffer statistics.
@@ -32,6 +125,14 @@ pub struct JitterStats {
pub packets_late: u64,
pub packets_duplicate: u64,
pub current_depth: usize,
/// Total frames decoded by the consumer (tracked externally via `record_decode`).
pub total_decoded: u64,
/// Number of times the consumer tried to decode but the buffer was empty/not-ready.
pub underruns: u64,
/// Number of packets dropped because the buffer exceeded max depth.
pub overruns: u64,
/// High water mark — maximum buffer depth observed.
pub max_depth_seen: usize,
}
/// Result of attempting to get the next packet for playout.
@@ -60,6 +161,27 @@ impl JitterBuffer {
min_depth,
initialized: false,
stats: JitterStats::default(),
adaptive: None,
}
}
/// Create a jitter buffer with adaptive playout delay.
///
/// The target depth will be automatically adjusted based on observed
/// inter-arrival jitter (NetEq-inspired algorithm).
///
/// - `min_delay`: minimum target delay in packets
/// - `max_delay`: maximum target delay in packets (also used as max_depth)
pub fn new_adaptive(min_delay: usize, max_delay: usize) -> Self {
Self {
buffer: BTreeMap::new(),
next_playout_seq: 0,
max_depth: max_delay,
target_depth: min_delay,
min_depth: min_delay,
initialized: false,
stats: JitterStats::default(),
adaptive: Some(AdaptivePlayoutDelay::new(min_delay, max_delay)),
}
}
@@ -99,12 +221,35 @@ impl JitterBuffer {
self.next_playout_seq = seq;
}
// Update adaptive playout delay if enabled.
//
// This plain `push` has no wall-clock arrival time, so the header
// timestamp is fed as both arrival and expected time, which makes the
// update a no-op for the estimator. Callers that want adaptive
// behaviour should use `push_with_arrival`, which supplies a real
// arrival timestamp.
if let Some(ref mut adaptive) = self.adaptive {
let expected_ms = packet.header.timestamp as u64;
let arrival_ms = expected_ms; // no wall clock here; see push_with_arrival
adaptive.update(arrival_ms, expected_ms);
self.target_depth = adaptive.target_delay();
self.min_depth = self.min_depth.min(self.target_depth);
}
self.buffer.insert(seq, packet);
// Evict oldest if over max depth
while self.buffer.len() > self.max_depth {
if let Some((&oldest_seq, _)) = self.buffer.first_key_value() {
self.buffer.remove(&oldest_seq);
self.stats.overruns += 1;
// Advance playout seq past evicted packet
if seq_before(self.next_playout_seq, oldest_seq.wrapping_add(1)) {
self.next_playout_seq = oldest_seq.wrapping_add(1);
@@ -114,6 +259,9 @@
}
}
self.stats.current_depth = self.buffer.len();
if self.stats.current_depth > self.stats.max_depth_seen {
self.stats.max_depth_seen = self.stats.current_depth;
}
}
/// Get the next packet for playout.
@@ -163,6 +311,86 @@ impl JitterBuffer {
self.stats = JitterStats::default();
}
/// Record that the consumer attempted to decode but the buffer was empty/not-ready.
pub fn record_underrun(&mut self) {
self.stats.underruns += 1;
}
/// Record a successful frame decode by the consumer.
pub fn record_decode(&mut self) {
self.stats.total_decoded += 1;
}
/// Reset statistics counters (preserves buffer contents and playout state).
pub fn reset_stats(&mut self) {
self.stats = JitterStats {
current_depth: self.buffer.len(),
..JitterStats::default()
};
}
/// Push a received packet with an explicit wall-clock arrival time.
///
/// This is the preferred entry point when adaptive playout delay is enabled,
/// since the estimator needs real arrival timestamps.
pub fn push_with_arrival(&mut self, packet: MediaPacket, arrival_ms: u64) {
let expected_ms = packet.header.timestamp as u64;
let seq = packet.header.seq;
self.stats.packets_received += 1;
if !self.initialized {
self.next_playout_seq = seq;
self.initialized = true;
}
// Check for duplicates
if self.buffer.contains_key(&seq) {
self.stats.packets_duplicate += 1;
return;
}
// Check if packet is too old (already played out)
if self.stats.packets_played > 0 && seq_before(seq, self.next_playout_seq) {
self.stats.packets_late += 1;
return;
}
// If we haven't started playout yet, adjust next_playout_seq to earliest known
if self.stats.packets_played == 0 && seq_before(seq, self.next_playout_seq) {
self.next_playout_seq = seq;
}
// Update adaptive playout delay if enabled.
if let Some(ref mut adaptive) = self.adaptive {
adaptive.update(arrival_ms, expected_ms);
self.target_depth = adaptive.target_delay();
}
self.buffer.insert(seq, packet);
// Evict oldest if over max depth
while self.buffer.len() > self.max_depth {
if let Some((&oldest_seq, _)) = self.buffer.first_key_value() {
self.buffer.remove(&oldest_seq);
self.stats.overruns += 1;
if seq_before(self.next_playout_seq, oldest_seq.wrapping_add(1)) {
self.next_playout_seq = oldest_seq.wrapping_add(1);
self.stats.packets_lost += 1;
}
}
}
self.stats.current_depth = self.buffer.len();
if self.stats.current_depth > self.stats.max_depth_seen {
self.stats.max_depth_seen = self.stats.current_depth;
}
}
/// Get a reference to the adaptive playout delay estimator, if enabled.
pub fn adaptive_delay(&self) -> Option<&AdaptivePlayoutDelay> {
self.adaptive.as_ref()
}
/// Adjust target depth based on observed jitter.
pub fn set_target_depth(&mut self, depth: usize) {
self.target_depth = depth.min(self.max_depth);
@@ -304,4 +532,192 @@ mod tests {
other => panic!("expected packet 0, got {:?}", other),
}
}
// ---------------------------------------------------------------
// AdaptivePlayoutDelay tests
// ---------------------------------------------------------------
#[test]
fn adaptive_delay_stable() {
// Feed packets with consistent 20ms spacing — target should stay at minimum.
let mut apd = AdaptivePlayoutDelay::new(3, 50);
for i in 0u64..200 {
let arrival_ms = i * 20;
let expected_ms = i * 20;
apd.update(arrival_ms, expected_ms);
}
// With zero jitter, target should be min_delay (ceil(0/20) + 2 = 2,
// clamped to min_delay=3).
assert_eq!(apd.target_delay(), 3);
assert!(
apd.jitter_estimate_ms() < 1.0,
"jitter estimate should be near zero, got {}",
apd.jitter_estimate_ms()
);
}
#[test]
fn adaptive_delay_increases_on_jitter() {
// Feed packets with variable spacing (±10ms jitter).
let mut apd = AdaptivePlayoutDelay::new(3, 50);
// Alternate: arrive 10ms early / 10ms late
for i in 0u64..200 {
let expected_ms = i * 20;
let jitter_offset: i64 = if i % 2 == 0 { 10 } else { -10 };
let arrival_ms = (expected_ms as i64 + jitter_offset).max(0) as u64;
apd.update(arrival_ms, expected_ms);
}
// Inter-arrival jitter should be ~20ms (swing of 10 to -10 = delta 20).
// target = ceil(~20/20) + 2 = 3, but EMA converges near 20 so target >= 3.
assert!(
apd.target_delay() >= 3,
"target should increase with jitter, got {}",
apd.target_delay()
);
assert!(
apd.jitter_estimate_ms() > 5.0,
"jitter estimate should be significant, got {}",
apd.jitter_estimate_ms()
);
}
#[test]
fn adaptive_delay_decreases_on_recovery() {
let mut apd = AdaptivePlayoutDelay::new(3, 50);
// Phase 1: high jitter (±30ms)
for i in 0u64..200 {
let expected_ms = i * 20;
let offset: i64 = if i % 2 == 0 { 30 } else { -30 };
let arrival_ms = (expected_ms as i64 + offset).max(0) as u64;
apd.update(arrival_ms, expected_ms);
}
let high_target = apd.target_delay();
let high_jitter = apd.jitter_estimate_ms();
// Phase 2: stable (no jitter) — target should decrease via EMA decay
for i in 200u64..600 {
let t = i * 20;
apd.update(t, t);
}
let low_target = apd.target_delay();
let low_jitter = apd.jitter_estimate_ms();
assert!(
low_target <= high_target,
"target should decrease after recovery: {} -> {}",
high_target,
low_target
);
assert!(
low_jitter < high_jitter,
"jitter estimate should decrease: {} -> {}",
high_jitter,
low_jitter
);
}
#[test]
fn adaptive_delay_clamped() {
let mut apd = AdaptivePlayoutDelay::new(3, 10);
// Extreme jitter: packets arrive with huge variance
for i in 0u64..500 {
let expected_ms = i * 20;
let offset: i64 = if i % 2 == 0 { 500 } else { -500 };
let arrival_ms = (expected_ms as i64 + offset).max(0) as u64;
apd.update(arrival_ms, expected_ms);
}
assert!(
apd.target_delay() <= 10,
"target should not exceed max_delay=10, got {}",
apd.target_delay()
);
assert!(
apd.target_delay() >= 3,
"target should not go below min_delay=3, got {}",
apd.target_delay()
);
}
#[test]
fn adaptive_jitter_estimate() {
let mut apd = AdaptivePlayoutDelay::new(3, 50);
// Initial jitter estimate should be zero
assert_eq!(apd.jitter_estimate_ms(), 0.0);
// After one packet, still zero (no delta yet)
apd.update(0, 0);
assert_eq!(apd.jitter_estimate_ms(), 0.0);
// Second packet with 5ms jitter
apd.update(25, 20); // arrived 5ms late
assert!(
apd.jitter_estimate_ms() > 0.0,
"jitter estimate should be positive after jittery packet"
);
assert!(
apd.jitter_estimate_ms() <= 5.0,
"first jitter sample of 5ms with alpha=0.05 should not exceed 5ms, got {}",
apd.jitter_estimate_ms()
);
// Feed many packets with ~15ms jitter — EMA should converge
for i in 2u64..500 {
let expected_ms = i * 20;
let arrival_ms = expected_ms + 15; // consistently 15ms late
apd.update(arrival_ms, expected_ms);
}
// Steady-state: if every packet is consistently 15ms late, the
// inter-arrival delta is still 20ms (arrivals at 35, 55, 75, ...),
// matching the expected spacing, so each jitter sample is
// |20 - 20| = 0 and the estimate converges toward 0.
// Use variable jitter instead for a meaningful test.
let mut apd2 = AdaptivePlayoutDelay::new(3, 50);
for i in 0u64..500 {
let expected_ms = i * 20;
// Alternate 0ms and 15ms late
let extra = if i % 2 == 0 { 0 } else { 15 };
let arrival_ms = expected_ms + extra;
apd2.update(arrival_ms, expected_ms);
}
let est = apd2.jitter_estimate_ms();
assert!(
est > 5.0 && est < 20.0,
"jitter estimate should converge near 15ms with alternating 0/15ms offsets, got {}",
est
);
}
// ---------------------------------------------------------------
// JitterBuffer with adaptive mode tests
// ---------------------------------------------------------------
#[test]
fn jitter_buffer_adaptive_constructor() {
let jb = JitterBuffer::new_adaptive(5, 250);
assert!(jb.adaptive_delay().is_some());
assert_eq!(jb.adaptive_delay().unwrap().target_delay(), 5);
}
#[test]
fn jitter_buffer_adaptive_push_with_arrival() {
let mut jb = JitterBuffer::new_adaptive(3, 50);
// Push packets with consistent timing
for i in 0u16..20 {
let pkt = make_packet(i);
let arrival_ms = i as u64 * 20;
jb.push_with_arrival(pkt, arrival_ms);
}
// With zero jitter, target should stay at min
let ad = jb.adaptive_delay().unwrap();
assert_eq!(ad.target_delay(), 3);
}
}


@@ -23,7 +23,10 @@ pub mod traits;
// Re-export key types at crate root for convenience.
pub use codec_id::{CodecId, QualityProfile};
pub use error::*;
pub use packet::{
HangupReason, MediaHeader, MediaPacket, MiniFrameContext, MiniHeader, QualityReport,
SignalMessage, TrunkEntry, TrunkFrame, FRAME_TYPE_FULL, FRAME_TYPE_MINI,
};
pub use quality::{AdaptiveQualityController, Tier};
pub use session::{Session, SessionEvent, SessionState};
pub use traits::*;


@@ -191,6 +191,9 @@ pub struct MediaPacket {
pub quality_report: Option<QualityReport>,
}
/// Maximum number of mini-frames between full headers (1 second at 50 fps).
pub const MINI_FRAME_FULL_INTERVAL: u32 = 50;
impl MediaPacket {
/// Serialize the entire packet to bytes.
pub fn to_bytes(&self) -> Bytes {
@@ -239,6 +242,276 @@ impl MediaPacket {
quality_report,
})
}
/// Serialize with mini-frame compression.
///
/// Uses the `MiniFrameContext` to decide whether to emit a compact 4-byte
/// mini-header or a full 12-byte header. A full header is forced on the
/// first frame and every `MINI_FRAME_FULL_INTERVAL` frames thereafter.
pub fn encode_compact(
&self,
ctx: &mut MiniFrameContext,
frames_since_full: &mut u32,
) -> Bytes {
if *frames_since_full > 0 && *frames_since_full < MINI_FRAME_FULL_INTERVAL {
// --- mini frame ---
let ts_delta = self
.header
.timestamp
.wrapping_sub(ctx.last_header.unwrap().timestamp)
as u16;
let mini = MiniHeader {
timestamp_delta_ms: ts_delta,
payload_len: self.payload.len() as u16,
};
let total = 1 + MiniHeader::WIRE_SIZE + self.payload.len();
let mut buf = BytesMut::with_capacity(total);
buf.put_u8(FRAME_TYPE_MINI);
mini.write_to(&mut buf);
buf.put(self.payload.clone());
// Advance the context so the next mini-frame delta is relative
// to this frame, mirroring what expand() does on the decoder side.
ctx.update(&self.header);
*frames_since_full += 1;
buf.freeze()
} else {
// --- full frame ---
let qr_size = if self.quality_report.is_some() {
QualityReport::WIRE_SIZE
} else {
0
};
let total = 1 + MediaHeader::WIRE_SIZE + self.payload.len() + qr_size;
let mut buf = BytesMut::with_capacity(total);
buf.put_u8(FRAME_TYPE_FULL);
self.header.write_to(&mut buf);
buf.put(self.payload.clone());
if let Some(ref qr) = self.quality_report {
qr.write_to(&mut buf);
}
ctx.update(&self.header);
*frames_since_full = 1; // next frame will be the 1st after full
buf.freeze()
}
}
/// Decode from compact wire format (auto-detects full vs mini).
///
/// Returns `None` on malformed input or if a mini-frame arrives before any
/// full header baseline has been established.
pub fn decode_compact(buf: &[u8], ctx: &mut MiniFrameContext) -> Option<Self> {
if buf.is_empty() {
return None;
}
let frame_type = buf[0];
let rest = &buf[1..];
match frame_type {
FRAME_TYPE_FULL => {
let pkt = Self::from_bytes(Bytes::copy_from_slice(rest))?;
ctx.update(&pkt.header);
Some(pkt)
}
FRAME_TYPE_MINI => {
if rest.len() < MiniHeader::WIRE_SIZE {
return None;
}
let mut cursor = rest;
let mini = MiniHeader::read_from(&mut cursor)?;
let payload_start = 1 + MiniHeader::WIRE_SIZE;
let payload_end = payload_start + mini.payload_len as usize;
if buf.len() < payload_end {
return None;
}
let payload = Bytes::copy_from_slice(&buf[payload_start..payload_end]);
let header = ctx.expand(&mini)?;
Some(Self {
header,
payload,
quality_report: None,
})
}
_ => None,
}
}
}
// ---------------------------------------------------------------------------
// Trunking — multiplex multiple session packets into one QUIC datagram
// ---------------------------------------------------------------------------
/// A single entry inside a [`TrunkFrame`].
#[derive(Clone, Debug)]
pub struct TrunkEntry {
/// 2-byte session identifier (up to 65 536 sessions).
pub session_id: [u8; 2],
/// Encoded MediaPacket payload (already compressed).
pub payload: Bytes,
}
impl TrunkEntry {
/// Per-entry wire overhead: 2 (session_id) + 2 (len).
pub const OVERHEAD: usize = 4;
}
/// A trunked frame carrying multiple session packets in one datagram.
///
/// Wire format:
/// ```text
/// [count:u16] [entry1] [entry2] ...
/// ```
/// Each entry:
/// ```text
/// [session_id:2] [len:u16] [payload:len]
/// ```
#[derive(Clone, Debug)]
pub struct TrunkFrame {
pub packets: Vec<TrunkEntry>,
}
impl TrunkFrame {
/// Create an empty trunk frame.
pub fn new() -> Self {
Self {
packets: Vec::new(),
}
}
/// Append a session packet to the frame.
pub fn push(&mut self, session_id: [u8; 2], payload: Bytes) {
self.packets.push(TrunkEntry {
session_id,
payload,
});
}
/// Number of entries in the frame.
pub fn len(&self) -> usize {
self.packets.len()
}
/// Whether the frame is empty.
pub fn is_empty(&self) -> bool {
self.packets.is_empty()
}
/// Total wire size of the encoded frame.
pub fn wire_size(&self) -> usize {
// 2 bytes for count + each entry
2 + self
.packets
.iter()
.map(|e| TrunkEntry::OVERHEAD + e.payload.len())
.sum::<usize>()
}
/// Encode to wire bytes.
pub fn encode(&self) -> Bytes {
let mut buf = BytesMut::with_capacity(self.wire_size());
buf.put_u16(self.packets.len() as u16);
for entry in &self.packets {
buf.put_slice(&entry.session_id);
buf.put_u16(entry.payload.len() as u16);
buf.put(entry.payload.clone());
}
buf.freeze()
}
/// Decode from wire bytes. Returns `None` on malformed input.
pub fn decode(buf: &[u8]) -> Option<Self> {
if buf.len() < 2 {
return None;
}
let mut cursor = &buf[..];
let count = cursor.get_u16() as usize;
let mut packets = Vec::with_capacity(count);
for _ in 0..count {
if cursor.remaining() < TrunkEntry::OVERHEAD {
return None;
}
let mut session_id = [0u8; 2];
session_id[0] = cursor.get_u8();
session_id[1] = cursor.get_u8();
let len = cursor.get_u16() as usize;
if cursor.remaining() < len {
return None;
}
let payload = Bytes::copy_from_slice(&cursor[..len]);
cursor.advance(len);
packets.push(TrunkEntry {
session_id,
payload,
});
}
Some(Self { packets })
}
}
// ---------------------------------------------------------------------------
// Mini-frames — compact header for steady-state media packets
// ---------------------------------------------------------------------------
/// Frame type tag: full MediaHeader follows.
pub const FRAME_TYPE_FULL: u8 = 0x00;
/// Frame type tag: MiniHeader follows (requires prior baseline).
pub const FRAME_TYPE_MINI: u8 = 0x01;
/// Compact 4-byte header used after a full MediaHeader baseline has been
/// established. Only the timestamp delta and payload length are transmitted;
/// all other fields are inherited from the last full header.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct MiniHeader {
/// Milliseconds elapsed since the last header's timestamp.
pub timestamp_delta_ms: u16,
/// Length of the payload that follows this header.
pub payload_len: u16,
}
impl MiniHeader {
/// Header size in bytes on the wire.
pub const WIRE_SIZE: usize = 4;
/// Serialize to a 4-byte buffer.
pub fn write_to(&self, buf: &mut impl BufMut) {
buf.put_u16(self.timestamp_delta_ms);
buf.put_u16(self.payload_len);
}
/// Deserialize from a buffer. Returns `None` if insufficient data.
pub fn read_from(buf: &mut impl Buf) -> Option<Self> {
if buf.remaining() < Self::WIRE_SIZE {
return None;
}
Some(Self {
timestamp_delta_ms: buf.get_u16(),
payload_len: buf.get_u16(),
})
}
}
/// Stateful context that expands [`MiniHeader`]s back into full
/// [`MediaHeader`]s by tracking the last baseline header.
#[derive(Clone, Debug, Default)]
pub struct MiniFrameContext {
last_header: Option<MediaHeader>,
}
impl MiniFrameContext {
/// Record a full header as the new baseline for subsequent mini-frames.
pub fn update(&mut self, header: &MediaHeader) {
self.last_header = Some(*header);
}
/// Expand a mini-header into a full [`MediaHeader`] using the stored
/// baseline. Returns `None` if no baseline has been set yet.
pub fn expand(&mut self, mini: &MiniHeader) -> Option<MediaHeader> {
let base = self.last_header.as_ref()?;
let mut expanded = *base;
expanded.seq = base.seq.wrapping_add(1);
expanded.timestamp = base.timestamp.wrapping_add(mini.timestamp_delta_ms as u32);
self.last_header = Some(expanded);
Some(expanded)
}
}
/// Signaling messages sent over the reliable QUIC stream.
@@ -297,6 +570,27 @@ pub enum SignalMessage {
/// End the call.
Hangup { reason: HangupReason },
/// featherChat bearer token for relay authentication.
/// Sent as the first signal message when --auth-url is configured.
AuthToken { token: String },
/// Put the call on hold (stop sending media, keep session alive).
Hold,
/// Resume a held call.
Unhold,
/// Mute request from the remote side (server-initiated mute, like IAX2 QUELCH).
Mute,
/// Unmute request from the remote side (like IAX2 UNQUELCH).
Unmute,
/// Transfer the call to another peer.
Transfer {
target_fingerprint: String,
/// Optional relay address for the transfer target.
relay_addr: Option<String>,
},
/// Acknowledge a transfer request.
TransferAck,
}
/// Reasons for ending a call.
@@ -410,6 +704,78 @@ mod tests {
assert_eq!(packet.quality_report, decoded.quality_report);
}
#[test]
fn hold_unhold_serialize() {
let hold = SignalMessage::Hold;
let json = serde_json::to_string(&hold).unwrap();
let decoded: SignalMessage = serde_json::from_str(&json).unwrap();
assert!(matches!(decoded, SignalMessage::Hold));
let unhold = SignalMessage::Unhold;
let json = serde_json::to_string(&unhold).unwrap();
let decoded: SignalMessage = serde_json::from_str(&json).unwrap();
assert!(matches!(decoded, SignalMessage::Unhold));
}
#[test]
fn mute_unmute_serialize() {
let mute = SignalMessage::Mute;
let json = serde_json::to_string(&mute).unwrap();
let decoded: SignalMessage = serde_json::from_str(&json).unwrap();
assert!(matches!(decoded, SignalMessage::Mute));
let unmute = SignalMessage::Unmute;
let json = serde_json::to_string(&unmute).unwrap();
let decoded: SignalMessage = serde_json::from_str(&json).unwrap();
assert!(matches!(decoded, SignalMessage::Unmute));
}
#[test]
fn transfer_serialize() {
let transfer = SignalMessage::Transfer {
target_fingerprint: "abc123".to_string(),
relay_addr: Some("relay.example.com:4433".to_string()),
};
let json = serde_json::to_string(&transfer).unwrap();
let decoded: SignalMessage = serde_json::from_str(&json).unwrap();
match decoded {
SignalMessage::Transfer {
target_fingerprint,
relay_addr,
} => {
assert_eq!(target_fingerprint, "abc123");
assert_eq!(relay_addr.unwrap(), "relay.example.com:4433");
}
_ => panic!("expected Transfer variant"),
}
// Also test with relay_addr = None
let transfer_no_relay = SignalMessage::Transfer {
target_fingerprint: "def456".to_string(),
relay_addr: None,
};
let json = serde_json::to_string(&transfer_no_relay).unwrap();
let decoded: SignalMessage = serde_json::from_str(&json).unwrap();
match decoded {
SignalMessage::Transfer {
target_fingerprint,
relay_addr,
} => {
assert_eq!(target_fingerprint, "def456");
assert!(relay_addr.is_none());
}
_ => panic!("expected Transfer variant"),
}
}
#[test]
fn transfer_ack_serialize() {
let ack = SignalMessage::TransferAck;
let json = serde_json::to_string(&ack).unwrap();
let decoded: SignalMessage = serde_json::from_str(&json).unwrap();
assert!(matches!(decoded, SignalMessage::TransferAck));
}
#[test]
fn fec_ratio_encode_decode() {
let ratio = 0.5;
@@ -421,4 +787,247 @@ mod tests {
let encoded_max = MediaHeader::encode_fec_ratio(ratio_max);
assert_eq!(encoded_max, 127);
}
// ---------------------------------------------------------------
// TrunkFrame tests
// ---------------------------------------------------------------
#[test]
fn trunk_frame_encode_decode() {
let mut frame = TrunkFrame::new();
frame.push([0, 1], Bytes::from_static(b"hello"));
frame.push([0, 2], Bytes::from_static(b"world!"));
frame.push([1, 0], Bytes::from_static(b"x"));
assert_eq!(frame.len(), 3);
let encoded = frame.encode();
let decoded = TrunkFrame::decode(&encoded).expect("decode failed");
assert_eq!(decoded.len(), 3);
assert_eq!(decoded.packets[0].session_id, [0, 1]);
assert_eq!(decoded.packets[0].payload, Bytes::from_static(b"hello"));
assert_eq!(decoded.packets[1].session_id, [0, 2]);
assert_eq!(decoded.packets[1].payload, Bytes::from_static(b"world!"));
assert_eq!(decoded.packets[2].session_id, [1, 0]);
assert_eq!(decoded.packets[2].payload, Bytes::from_static(b"x"));
}
#[test]
fn trunk_frame_empty() {
let frame = TrunkFrame::new();
assert!(frame.is_empty());
assert_eq!(frame.len(), 0);
let encoded = frame.encode();
// Just the 2-byte count header with value 0.
assert_eq!(encoded.len(), 2);
assert_eq!(&encoded[..], &[0, 0]);
let decoded = TrunkFrame::decode(&encoded).unwrap();
assert!(decoded.is_empty());
}
#[test]
fn trunk_entry_wire_size() {
// Each entry overhead must be exactly 4 bytes (2 session_id + 2 len).
assert_eq!(TrunkEntry::OVERHEAD, 4);
// Verify empirically: one entry with a 10-byte payload should produce
// 2 (count) + 4 (overhead) + 10 (payload) = 16 bytes total.
let mut frame = TrunkFrame::new();
frame.push([0xAB, 0xCD], Bytes::from(vec![0u8; 10]));
let encoded = frame.encode();
assert_eq!(encoded.len(), 2 + 4 + 10);
}
// ---------------------------------------------------------------
// MiniHeader / MiniFrameContext tests
// ---------------------------------------------------------------
#[test]
fn mini_header_encode_decode() {
let mini = MiniHeader {
timestamp_delta_ms: 20,
payload_len: 160,
};
let mut buf = BytesMut::new();
mini.write_to(&mut buf);
let mut cursor = &buf[..];
let decoded = MiniHeader::read_from(&mut cursor).unwrap();
assert_eq!(mini, decoded);
}
#[test]
fn mini_header_wire_size() {
let mini = MiniHeader {
timestamp_delta_ms: 0xFFFF,
payload_len: 0xFFFF,
};
let mut buf = BytesMut::new();
mini.write_to(&mut buf);
assert_eq!(buf.len(), 4);
assert_eq!(MiniHeader::WIRE_SIZE, 4);
}
#[test]
fn mini_frame_context_expand() {
let baseline = MediaHeader {
version: 0,
is_repair: false,
codec_id: CodecId::Opus24k,
has_quality_report: false,
fec_ratio_encoded: 10,
seq: 100,
timestamp: 1000,
fec_block: 5,
fec_symbol: 0,
reserved: 0,
csrc_count: 0,
};
let mut ctx = MiniFrameContext::default();
ctx.update(&baseline);
// First expansion
let mini1 = MiniHeader {
timestamp_delta_ms: 20,
payload_len: 80,
};
let h1 = ctx.expand(&mini1).unwrap();
assert_eq!(h1.seq, 101);
assert_eq!(h1.timestamp, 1020);
assert_eq!(h1.codec_id, CodecId::Opus24k);
assert_eq!(h1.fec_block, 5);
// Second expansion — builds on expanded h1
let mini2 = MiniHeader {
timestamp_delta_ms: 20,
payload_len: 80,
};
let h2 = ctx.expand(&mini2).unwrap();
assert_eq!(h2.seq, 102);
assert_eq!(h2.timestamp, 1040);
}
#[test]
fn mini_frame_context_no_baseline() {
let mut ctx = MiniFrameContext::default();
let mini = MiniHeader {
timestamp_delta_ms: 20,
payload_len: 80,
};
assert!(ctx.expand(&mini).is_none());
}
#[test]
fn full_vs_mini_size_comparison() {
// Full frame on wire: 1 byte type tag + 12 byte MediaHeader = 13
let full_size = 1 + MediaHeader::WIRE_SIZE;
assert_eq!(full_size, 13);
// Mini frame on wire: 1 byte type tag + 4 byte MiniHeader = 5
let mini_size = 1 + MiniHeader::WIRE_SIZE;
assert_eq!(mini_size, 5);
// Verify the constants match expectations
assert_eq!(FRAME_TYPE_FULL, 0x00);
assert_eq!(FRAME_TYPE_MINI, 0x01);
}
// ---------------------------------------------------------------
// encode_compact / decode_compact tests
// ---------------------------------------------------------------
fn make_media_packet(seq: u16, ts: u32, payload: &[u8]) -> MediaPacket {
MediaPacket {
header: MediaHeader {
version: 0,
is_repair: false,
codec_id: CodecId::Opus24k,
has_quality_report: false,
fec_ratio_encoded: 10,
seq,
timestamp: ts,
fec_block: 0,
fec_symbol: 0,
reserved: 0,
csrc_count: 0,
},
payload: Bytes::from(payload.to_vec()),
quality_report: None,
}
}
#[test]
fn mini_frame_encode_decode_sequence() {
let mut enc_ctx = MiniFrameContext::default();
let mut dec_ctx = MiniFrameContext::default();
let mut frames_since_full: u32 = 0;
let packets: Vec<MediaPacket> = (0..5)
.map(|i| make_media_packet(i, i as u32 * 20, b"audio"))
.collect();
for (i, pkt) in packets.iter().enumerate() {
let wire = pkt.encode_compact(&mut enc_ctx, &mut frames_since_full);
if i == 0 {
// First frame must be full
assert_eq!(wire[0], FRAME_TYPE_FULL, "frame 0 should be FULL");
} else {
// Subsequent frames should be mini
assert_eq!(wire[0], FRAME_TYPE_MINI, "frame {i} should be MINI");
// Mini wire: 1 (tag) + 4 (mini header) + payload
assert_eq!(wire.len(), 1 + MiniHeader::WIRE_SIZE + pkt.payload.len());
}
let decoded = MediaPacket::decode_compact(&wire, &mut dec_ctx)
.unwrap_or_else(|| panic!("decode failed at frame {i}"));
assert_eq!(decoded.header.seq, pkt.header.seq);
assert_eq!(decoded.header.timestamp, pkt.header.timestamp);
assert_eq!(decoded.payload, pkt.payload);
}
}
#[test]
fn mini_frame_periodic_full() {
let mut ctx = MiniFrameContext::default();
let mut frames_since_full: u32 = 0;
// Encode MINI_FRAME_FULL_INTERVAL + 1 frames. Frame 0 and frame 50
// should be FULL, everything in between should be MINI.
for i in 0..=MINI_FRAME_FULL_INTERVAL {
let pkt = make_media_packet(i as u16, i * 20, b"data");
let wire = pkt.encode_compact(&mut ctx, &mut frames_since_full);
if i == 0 || i == MINI_FRAME_FULL_INTERVAL {
assert_eq!(
wire[0], FRAME_TYPE_FULL,
"frame {i} should be FULL"
);
} else {
assert_eq!(
wire[0], FRAME_TYPE_MINI,
"frame {i} should be MINI"
);
}
}
}
#[test]
fn mini_frame_disabled() {
// Simulate disabled mini-frames by always keeping frames_since_full at 0
// (which is what the encoder does when the feature is off).
let mut ctx = MiniFrameContext::default();
for i in 0..10u16 {
let pkt = make_media_packet(i, i as u32 * 20, b"payload");
// When mini-frames are disabled, the encoder carries no state between
// frames; resetting frames_since_full to 0 before every encode forces
// the full-frame path each time.
let mut frames_since_full: u32 = 0;
let wire = pkt.encode_compact(&mut ctx, &mut frames_since_full);
assert_eq!(wire[0], FRAME_TYPE_FULL, "frame {i} should be FULL when disabled");
}
}
}


@@ -20,11 +20,18 @@ bytes = { workspace = true }
serde = { workspace = true }
toml = "0.8"
anyhow = "1"
reqwest = { version = "0.12", features = ["json"] }
serde_json = "1"
rustls = { version = "0.23", default-features = false, features = ["ring", "std"] }
quinn = { workspace = true }
prometheus = "0.13"
axum = { version = "0.7", default-features = false, features = ["tokio", "http1"] }
[[bin]]
name = "wzp-relay"
path = "src/main.rs"
[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
wzp-transport = { workspace = true }
wzp-client = { workspace = true }


@@ -0,0 +1,106 @@
//! featherChat token authentication.
//!
//! When `--auth-url` is configured, the relay validates bearer tokens
//! against featherChat's `POST /v1/auth/validate` endpoint before
//! allowing clients to join rooms.
use serde::{Deserialize, Serialize};
use tracing::{info, warn};
/// Request body for featherChat token validation.
#[derive(Serialize)]
struct ValidateRequest {
token: String,
}
/// Response from featherChat token validation.
#[derive(Deserialize, Debug)]
pub struct ValidateResponse {
pub valid: bool,
pub fingerprint: Option<String>,
pub alias: Option<String>,
}
/// Validated client identity.
#[derive(Clone, Debug)]
pub struct AuthenticatedClient {
pub fingerprint: String,
pub alias: Option<String>,
}
/// Validate a bearer token against featherChat's auth endpoint.
///
/// Calls `POST {auth_url}` with `{ "token": "..." }`.
/// Returns the client identity if valid, or an error string.
pub async fn validate_token(
auth_url: &str,
token: &str,
) -> Result<AuthenticatedClient, String> {
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(5))
.build()
.map_err(|e| format!("http client error: {e}"))?;
let resp = client
.post(auth_url)
.json(&ValidateRequest {
token: token.to_string(),
})
.send()
.await
.map_err(|e| format!("auth request failed: {e}"))?;
if !resp.status().is_success() {
return Err(format!("auth endpoint returned {}", resp.status()));
}
let body: ValidateResponse = resp
.json()
.await
.map_err(|e| format!("invalid auth response: {e}"))?;
if body.valid {
let fingerprint = body
.fingerprint
.ok_or_else(|| "valid response missing fingerprint".to_string())?;
info!(%fingerprint, alias = ?body.alias, "token validated");
Ok(AuthenticatedClient {
fingerprint,
alias: body.alias,
})
} else {
warn!("token validation failed");
Err("invalid token".to_string())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn validate_request_serializes() {
let req = ValidateRequest {
token: "abc123".to_string(),
};
let json = serde_json::to_string(&req).unwrap();
assert!(json.contains("abc123"));
}
#[test]
fn validate_response_deserializes() {
let json = r#"{"valid": true, "fingerprint": "abcd1234", "alias": "manwe"}"#;
let resp: ValidateResponse = serde_json::from_str(json).unwrap();
assert!(resp.valid);
assert_eq!(resp.fingerprint.unwrap(), "abcd1234");
assert_eq!(resp.alias.unwrap(), "manwe");
}
#[test]
fn invalid_response_deserializes() {
let json = r#"{"valid": false}"#;
let resp: ValidateResponse = serde_json::from_str(json).unwrap();
assert!(!resp.valid);
assert!(resp.fingerprint.is_none());
}
}


@@ -19,6 +19,26 @@ pub struct RelayConfig {
pub jitter_max_depth: usize,
/// Logging level (trace, debug, info, warn, error).
pub log_level: String,
/// featherChat auth validation URL (e.g., "https://chat.example.com/v1/auth/validate").
/// If set, clients must present a valid token before joining rooms.
pub auth_url: Option<String>,
/// Port for the Prometheus metrics HTTP endpoint (e.g., 9090).
/// If None, the metrics endpoint is disabled.
pub metrics_port: Option<u16>,
/// Peer relay addresses to probe for health monitoring.
/// Each target gets a persistent QUIC connection sending 1 Ping/s.
#[serde(default)]
pub probe_targets: Vec<SocketAddr>,
/// Enable mesh mode: each relay probes all configured targets concurrently.
/// Discovery is manual via multiple --probe flags; this flag signals intent.
#[serde(default)]
pub probe_mesh: bool,
/// Enable trunk batching for outgoing media in room mode.
/// When true, packets destined for the same receiver are accumulated into
/// [`TrunkFrame`]s and flushed every 5 ms (or when the batcher is full),
/// reducing per-packet QUIC datagram overhead.
#[serde(default)]
pub trunking_enabled: bool,
}
impl Default for RelayConfig {
@@ -30,6 +50,11 @@ impl Default for RelayConfig {
jitter_target_depth: 50,
jitter_max_depth: 250,
log_level: "info".to_string(),
auth_url: None,
metrics_port: None,
probe_targets: Vec::new(),
probe_mesh: false,
trunking_enabled: false,
}
}
}


@@ -7,13 +7,18 @@
//! It operates on FEC-protected packets, managing loss recovery and adaptive
//! quality transitions.
pub mod auth;
pub mod config;
pub mod handshake;
pub mod metrics;
pub mod pipeline;
pub mod probe;
pub mod room;
pub mod session_mgr;
pub mod trunk;
pub use config::RelayConfig;
pub use handshake::accept_handshake;
pub use pipeline::{PipelineConfig, PipelineStats, RelayPipeline};
pub use session_mgr::{RelaySession, SessionId, SessionInfo, SessionManager, SessionState};
pub use trunk::TrunkBatcher;


@@ -17,8 +17,10 @@ use tracing::{error, info};
use wzp_proto::MediaTransport;
use wzp_relay::config::RelayConfig;
use wzp_relay::metrics::RelayMetrics;
use wzp_relay::pipeline::{PipelineConfig, RelayPipeline};
use wzp_relay::room::{self, RoomManager};
use wzp_relay::session_mgr::SessionManager;
fn parse_args() -> RelayConfig {
let mut config = RelayConfig::default();
@@ -38,17 +40,57 @@ fn parse_args() -> RelayConfig {
.parse().expect("invalid --remote address"),
);
}
"--auth-url" => {
i += 1;
config.auth_url = Some(
args.get(i).expect("--auth-url requires a URL").to_string(),
);
}
"--metrics-port" => {
i += 1;
config.metrics_port = Some(
args.get(i).expect("--metrics-port requires a port number")
.parse().expect("invalid --metrics-port number"),
);
}
"--probe" => {
i += 1;
let addr: SocketAddr = args.get(i)
.expect("--probe requires an address")
.parse()
.expect("invalid --probe address");
config.probe_targets.push(addr);
}
"--probe-mesh" => {
config.probe_mesh = true;
}
"--trunking" => {
config.trunking_enabled = true;
}
"--mesh-status" => {
// Print mesh table from a fresh registry and exit.
// In practice this is useful after the relay has been running;
// here we just demonstrate the formatter with an empty registry.
let m = RelayMetrics::new();
print!("{}", wzp_relay::probe::mesh_summary(m.registry()));
std::process::exit(0);
}
"--help" | "-h" => { "--help" | "-h" => {
eprintln!("Usage: wzp-relay [--listen <addr>] [--remote <addr>]"); eprintln!("Usage: wzp-relay [--listen <addr>] [--remote <addr>] [--auth-url <url>] [--metrics-port <port>] [--probe <addr>]... [--probe-mesh] [--mesh-status]");
eprintln!(); eprintln!();
eprintln!("Options:"); eprintln!("Options:");
eprintln!(" --listen <addr> Listen address (default: 0.0.0.0:4433)"); eprintln!(" --listen <addr> Listen address (default: 0.0.0.0:4433)");
eprintln!(" --remote <addr> Remote relay for forwarding (disables room mode)"); eprintln!(" --remote <addr> Remote relay for forwarding (disables room mode)");
eprintln!(" --auth-url <url> featherChat auth endpoint (e.g., https://chat.example.com/v1/auth/validate)");
eprintln!(" When set, clients must send a bearer token as first signal message.");
eprintln!(" --metrics-port <port> Prometheus metrics HTTP port (e.g., 9090). Disabled if not set.");
eprintln!(" --probe <addr> Peer relay to probe for health monitoring (repeatable).");
eprintln!(" --probe-mesh Enable mesh mode (mark config flag, probes all --probe targets).");
eprintln!(" --mesh-status Print mesh health table and exit (diagnostic).");
eprintln!(" --trunking Enable trunk batching for outgoing media in room mode.");
eprintln!();
eprintln!("Room mode (default):");
eprintln!(" Clients join rooms by name. Packets forwarded to all others (SFU).");
eprintln!(" Room name comes from QUIC SNI or defaults to 'default'.");
std::process::exit(0);
}
other => {
@@ -134,7 +176,17 @@ async fn main() -> anyhow::Result<()> {
.install_default()
.expect("failed to install rustls crypto provider");
// Prometheus metrics
let metrics = Arc::new(RelayMetrics::new());
if let Some(port) = config.metrics_port {
let m = metrics.clone();
tokio::spawn(wzp_relay::metrics::serve_metrics(port, m));
}
// Generate ephemeral relay identity for crypto handshake
let relay_seed = wzp_crypto::Seed::generate();
let relay_fp = relay_seed.derive_identity().public_identity().fingerprint;
info!(addr = %config.listen_addr, fingerprint = %relay_fp, "WarzonePhone relay starting");
let (server_config, _cert) = wzp_transport::server_config();
let endpoint = wzp_transport::create_endpoint(config.listen_addr, Some(server_config))?;
@@ -154,6 +206,29 @@ async fn main() -> anyhow::Result<()> {
// Room manager (room mode only)
let room_mgr = Arc::new(Mutex::new(RoomManager::new()));
// Session manager — enforces max concurrent sessions
let session_mgr = Arc::new(Mutex::new(SessionManager::new(config.max_sessions)));
// Spawn inter-relay health probes via ProbeMesh coordinator
if !config.probe_targets.is_empty() {
let mesh = wzp_relay::probe::ProbeMesh::new(
config.probe_targets.clone(),
metrics.registry(),
);
info!(
targets = mesh.target_count(),
mesh = config.probe_mesh,
"spawning probe mesh"
);
tokio::spawn(async move { mesh.run_all().await });
}
if let Some(ref url) = config.auth_url {
info!(url, "auth enabled — clients must present featherChat token");
} else {
info!("auth disabled — any client can connect (use --auth-url to enable)");
}
info!("Listening for connections..."); info!("Listening for connections...");
loop { loop {
@@ -164,12 +239,15 @@ async fn main() -> anyhow::Result<()> {
let remote_transport = remote_transport.clone(); let remote_transport = remote_transport.clone();
let room_mgr = room_mgr.clone(); let room_mgr = room_mgr.clone();
let session_mgr = session_mgr.clone();
let auth_url = config.auth_url.clone();
let relay_seed_bytes = relay_seed.0;
let metrics = metrics.clone();
let trunking_enabled = config.trunking_enabled;
tokio::spawn(async move {
let addr = connection.remote_address();
// Extract room name from QUIC handshake data (SNI).
// The web bridge connects with the room name as server_name.
let room_name = connection
.handshake_data()
.and_then(|hd| {
@@ -180,7 +258,101 @@ async fn main() -> anyhow::Result<()> {
let transport = Arc::new(wzp_transport::QuinnTransport::new(connection));
// Probe connections use SNI "_probe" to identify themselves.
// They skip auth + handshake and just do Ping->Pong.
if room_name == "_probe" {
info!(%addr, "probe connection detected, entering Ping/Pong responder");
loop {
match transport.recv_signal().await {
Ok(Some(wzp_proto::SignalMessage::Ping { timestamp_ms })) => {
if let Err(e) = transport.send_signal(
&wzp_proto::SignalMessage::Pong { timestamp_ms },
).await {
error!(%addr, "probe pong send error: {e}");
break;
}
}
Ok(Some(_)) => {
// Ignore non-Ping signals on probe connections
}
Ok(None) => {
info!(%addr, "probe connection closed");
break;
}
Err(e) => {
error!(%addr, "probe recv error: {e}");
break;
}
}
}
transport.close().await.ok();
return;
}
// Auth: if --auth-url is set, expect AuthToken as first signal
let authenticated_fp: Option<String> = if let Some(ref url) = auth_url {
info!(%addr, "waiting for auth token...");
match transport.recv_signal().await {
Ok(Some(wzp_proto::SignalMessage::AuthToken { token })) => {
match wzp_relay::auth::validate_token(url, &token).await {
Ok(client) => {
metrics.auth_attempts.with_label_values(&["ok"]).inc();
info!(
%addr,
fingerprint = %client.fingerprint,
alias = ?client.alias,
"authenticated"
);
Some(client.fingerprint)
}
Err(e) => {
metrics.auth_attempts.with_label_values(&["fail"]).inc();
error!(%addr, "auth failed: {e}");
transport.close().await.ok();
return;
}
}
}
Ok(Some(_)) => {
error!(%addr, "expected AuthToken as first signal, got something else");
transport.close().await.ok();
return;
}
Ok(None) => {
error!(%addr, "connection closed before auth");
return;
}
Err(e) => {
error!(%addr, "signal recv error during auth: {e}");
transport.close().await.ok();
return;
}
}
} else {
None
};
// Crypto handshake: verify client identity + negotiate quality profile
let handshake_start = std::time::Instant::now();
let (_crypto_session, _chosen_profile) = match wzp_relay::handshake::accept_handshake(
&*transport,
&relay_seed_bytes,
).await {
Ok(result) => {
let elapsed = handshake_start.elapsed().as_secs_f64();
metrics.handshake_duration.observe(elapsed);
info!(%addr, elapsed_ms = %(elapsed * 1000.0), "crypto handshake complete");
result
}
Err(e) => {
error!(%addr, "handshake failed: {e}");
transport.close().await.ok();
return;
}
};
info!(%addr, room = %room_name, "client joining");
if let Some(remote) = remote_transport {
// Forward mode — same as before
@@ -211,19 +383,66 @@ async fn main() -> anyhow::Result<()> {
stats_handle.abort();
transport.close().await.ok();
} else {
// Room mode — enforce max sessions, then join room
let session_id = {
let mut smgr = session_mgr.lock().await;
match smgr.create_session(&room_name, authenticated_fp.clone()) {
Ok(id) => id,
Err(e) => {
error!(%addr, room = %room_name, "session rejected: {e}");
transport.close().await.ok();
return;
}
}
};
metrics.active_sessions.inc();
let participant_id = {
let mut mgr = room_mgr.lock().await;
match mgr.join(&room_name, addr, transport.clone(), authenticated_fp.as_deref()) {
Ok(id) => {
metrics.active_rooms.set(mgr.list().len() as i64);
id
}
Err(e) => {
error!(%addr, room = %room_name, "room join denied: {e}");
// Clean up the session we just created
metrics.active_sessions.dec();
let mut smgr = session_mgr.lock().await;
smgr.remove_session(session_id);
transport.close().await.ok();
return;
}
}
};
let session_id_str: String = session_id
.iter()
.map(|b| format!("{b:02x}"))
.collect();
room::run_participant(
room_mgr.clone(),
room_name,
participant_id,
transport.clone(),
metrics.clone(),
&session_id_str,
trunking_enabled,
).await;
// Participant disconnected — clean up per-session metrics
metrics.remove_session_metrics(&session_id_str);
metrics.active_sessions.dec();
{
let mgr = room_mgr.lock().await;
metrics.active_rooms.set(mgr.list().len() as i64);
}
{
let mut smgr = session_mgr.lock().await;
smgr.remove_session(session_id);
}
transport.close().await.ok();
}
});
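The per-session metric label is the session id rendered byte-by-byte as lowercase hex, the same formatting `session_id_str` uses above. The pattern as a standalone sketch (the byte values are illustrative):

```rust
// Render a binary session id as a lowercase-hex string, suitable
// for use as a Prometheus label value.
fn hex_label(id: &[u8]) -> String {
    id.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    let session_id = [0x4d_u8, 0x00, 0xfe, 0x3a];
    let label = hex_label(&session_id);
    assert_eq!(label, "4d00fe3a");
    println!("{label}");
}
```

The `{b:02x}` format keeps every byte two characters wide, so labels have a fixed length and zero bytes don't collapse.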


@@ -0,0 +1,316 @@
//! Prometheus metrics for the WZP relay daemon.
use prometheus::{
Encoder, GaugeVec, Histogram, HistogramOpts, IntCounter, IntCounterVec, IntGauge, IntGaugeVec,
Opts, Registry, TextEncoder,
};
use wzp_proto::packet::QualityReport;
use std::sync::Arc;
/// All relay-level Prometheus metrics.
#[derive(Clone)]
pub struct RelayMetrics {
pub active_sessions: IntGauge,
pub active_rooms: IntGauge,
pub packets_forwarded: IntCounter,
pub bytes_forwarded: IntCounter,
pub auth_attempts: IntCounterVec,
pub handshake_duration: Histogram,
// Per-session metrics
pub session_buffer_depth: IntGaugeVec,
pub session_loss_pct: GaugeVec,
pub session_rtt_ms: GaugeVec,
pub session_underruns: IntCounterVec,
pub session_overruns: IntCounterVec,
registry: Registry,
}
impl RelayMetrics {
/// Create and register all relay metrics with a new registry.
pub fn new() -> Self {
let registry = Registry::new();
let active_sessions = IntGauge::with_opts(
Opts::new("wzp_relay_active_sessions", "Current active sessions"),
)
.expect("metric");
let active_rooms = IntGauge::with_opts(
Opts::new("wzp_relay_active_rooms", "Current active rooms"),
)
.expect("metric");
let packets_forwarded = IntCounter::with_opts(
Opts::new("wzp_relay_packets_forwarded_total", "Total packets forwarded"),
)
.expect("metric");
let bytes_forwarded = IntCounter::with_opts(
Opts::new("wzp_relay_bytes_forwarded_total", "Total bytes forwarded"),
)
.expect("metric");
let auth_attempts = IntCounterVec::new(
Opts::new("wzp_relay_auth_attempts_total", "Auth validation attempts"),
&["result"],
)
.expect("metric");
let handshake_duration = Histogram::with_opts(
HistogramOpts::new(
"wzp_relay_handshake_duration_seconds",
"Crypto handshake time",
)
.buckets(vec![0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5]),
)
.expect("metric");
let session_buffer_depth = IntGaugeVec::new(
Opts::new(
"wzp_relay_session_jitter_buffer_depth",
"Buffer depth per session",
),
&["session_id"],
)
.expect("metric");
let session_loss_pct = GaugeVec::new(
Opts::new(
"wzp_relay_session_loss_pct",
"Packet loss percentage per session",
),
&["session_id"],
)
.expect("metric");
let session_rtt_ms = GaugeVec::new(
Opts::new(
"wzp_relay_session_rtt_ms",
"Round-trip time per session",
),
&["session_id"],
)
.expect("metric");
let session_underruns = IntCounterVec::new(
Opts::new(
"wzp_relay_session_underruns_total",
"Jitter buffer underruns per session",
),
&["session_id"],
)
.expect("metric");
let session_overruns = IntCounterVec::new(
Opts::new(
"wzp_relay_session_overruns_total",
"Jitter buffer overruns per session",
),
&["session_id"],
)
.expect("metric");
registry.register(Box::new(active_sessions.clone())).expect("register");
registry.register(Box::new(active_rooms.clone())).expect("register");
registry.register(Box::new(packets_forwarded.clone())).expect("register");
registry.register(Box::new(bytes_forwarded.clone())).expect("register");
registry.register(Box::new(auth_attempts.clone())).expect("register");
registry.register(Box::new(handshake_duration.clone())).expect("register");
registry.register(Box::new(session_buffer_depth.clone())).expect("register");
registry.register(Box::new(session_loss_pct.clone())).expect("register");
registry.register(Box::new(session_rtt_ms.clone())).expect("register");
registry.register(Box::new(session_underruns.clone())).expect("register");
registry.register(Box::new(session_overruns.clone())).expect("register");
Self {
active_sessions,
active_rooms,
packets_forwarded,
bytes_forwarded,
auth_attempts,
handshake_duration,
session_buffer_depth,
session_loss_pct,
session_rtt_ms,
session_underruns,
session_overruns,
registry,
}
}
/// Update per-session quality metrics from a QualityReport.
pub fn update_session_quality(&self, session_id: &str, report: &QualityReport) {
self.session_loss_pct
.with_label_values(&[session_id])
.set(report.loss_percent() as f64);
self.session_rtt_ms
.with_label_values(&[session_id])
.set(report.rtt_ms() as f64);
}
/// Update per-session buffer metrics.
pub fn update_session_buffer(
&self,
session_id: &str,
depth: usize,
underruns: u64,
overruns: u64,
) {
self.session_buffer_depth
.with_label_values(&[session_id])
.set(depth as i64);
// Prometheus counters are monotonic and have no `set`. The jitter
// buffer reports cumulative totals each tick, so we reconcile by
// incrementing each counter by the positive delta between the
// reported total and its current value: max(0, new - current).
// Stale or lower reports are ignored, so counters never go down.
let cur_underruns = self
.session_underruns
.with_label_values(&[session_id])
.get();
if underruns > cur_underruns as u64 {
self.session_underruns
.with_label_values(&[session_id])
.inc_by(underruns - cur_underruns as u64);
}
let cur_overruns = self
.session_overruns
.with_label_values(&[session_id])
.get();
if overruns > cur_overruns as u64 {
self.session_overruns
.with_label_values(&[session_id])
.inc_by(overruns - cur_overruns as u64);
}
}
/// Remove all per-session label values for a disconnected session.
pub fn remove_session_metrics(&self, session_id: &str) {
let _ = self.session_buffer_depth.remove_label_values(&[session_id]);
let _ = self.session_loss_pct.remove_label_values(&[session_id]);
let _ = self.session_rtt_ms.remove_label_values(&[session_id]);
let _ = self.session_underruns.remove_label_values(&[session_id]);
let _ = self.session_overruns.remove_label_values(&[session_id]);
}
/// Get a reference to the underlying Prometheus registry.
/// Probe metrics are registered on this same registry so they appear in /metrics output.
pub fn registry(&self) -> &Registry {
&self.registry
}
/// Gather all metrics and encode them as Prometheus text format.
pub fn metrics_handler(&self) -> String {
let encoder = TextEncoder::new();
let metric_families = self.registry.gather();
let mut buffer = Vec::new();
encoder.encode(&metric_families, &mut buffer).expect("encode");
String::from_utf8(buffer).expect("utf8")
}
}
/// Start an HTTP server serving GET /metrics and GET /mesh on the given port.
pub async fn serve_metrics(port: u16, metrics: Arc<RelayMetrics>) {
use axum::{routing::get, Router};
let metrics_clone = metrics.clone();
let app = Router::new()
.route(
"/metrics",
get(move || {
let m = metrics.clone();
async move { m.metrics_handler() }
}),
)
.route(
"/mesh",
get(move || {
let m = metrics_clone.clone();
async move { crate::probe::mesh_summary(m.registry()) }
}),
);
let addr = std::net::SocketAddr::from(([0, 0, 0, 0], port));
let listener = tokio::net::TcpListener::bind(addr)
.await
.expect("failed to bind metrics port");
tracing::info!(%addr, "metrics endpoint serving");
axum::serve(listener, app)
.await
.expect("metrics server error");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn metrics_register() {
let m = RelayMetrics::new();
// Touch the CounterVec labels so they appear in output
m.auth_attempts.with_label_values(&["ok"]);
m.auth_attempts.with_label_values(&["fail"]);
let output = m.metrics_handler();
// Should contain all registered metric names (as HELP or TYPE lines)
assert!(output.contains("wzp_relay_active_sessions"));
assert!(output.contains("wzp_relay_active_rooms"));
assert!(output.contains("wzp_relay_packets_forwarded_total"));
assert!(output.contains("wzp_relay_bytes_forwarded_total"));
assert!(output.contains("wzp_relay_auth_attempts_total"));
assert!(output.contains("wzp_relay_handshake_duration_seconds"));
}
#[test]
fn session_quality_update() {
let m = RelayMetrics::new();
let report = QualityReport {
loss_pct: 128, // ~50%
rtt_4ms: 25, // 100ms
jitter_ms: 10,
bitrate_cap_kbps: 200,
};
m.update_session_quality("sess-abc", &report);
let output = m.metrics_handler();
assert!(output.contains("wzp_relay_session_loss_pct{session_id=\"sess-abc\"}"));
assert!(output.contains("wzp_relay_session_rtt_ms{session_id=\"sess-abc\"}"));
// Verify rtt value (25 * 4 = 100)
assert!(output.contains("wzp_relay_session_rtt_ms{session_id=\"sess-abc\"} 100"));
}
#[test]
fn session_metrics_cleanup() {
let m = RelayMetrics::new();
let report = QualityReport {
loss_pct: 50,
rtt_4ms: 10,
jitter_ms: 5,
bitrate_cap_kbps: 100,
};
m.update_session_quality("sess-cleanup", &report);
m.update_session_buffer("sess-cleanup", 42, 3, 1);
// Verify they appear
let output = m.metrics_handler();
assert!(output.contains("sess-cleanup"));
// Remove and verify they are gone
m.remove_session_metrics("sess-cleanup");
let output = m.metrics_handler();
assert!(!output.contains("sess-cleanup"));
}
#[test]
fn metrics_increment() {
let m = RelayMetrics::new();
m.active_sessions.set(5);
m.active_rooms.set(2);
m.packets_forwarded.inc_by(100);
m.bytes_forwarded.inc_by(48000);
m.auth_attempts.with_label_values(&["ok"]).inc();
m.auth_attempts.with_label_values(&["fail"]).inc_by(3);
m.handshake_duration.observe(0.042);
let output = m.metrics_handler();
assert!(output.contains("wzp_relay_active_sessions 5"));
assert!(output.contains("wzp_relay_active_rooms 2"));
assert!(output.contains("wzp_relay_packets_forwarded_total 100"));
assert!(output.contains("wzp_relay_bytes_forwarded_total 48000"));
assert!(output.contains("wzp_relay_auth_attempts_total{result=\"ok\"} 1"));
assert!(output.contains("wzp_relay_auth_attempts_total{result=\"fail\"} 3"));
assert!(output.contains("wzp_relay_handshake_duration_seconds_count 1"));
}
}
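`update_session_buffer` reconciles the jitter buffer's cumulative totals with monotonic Prometheus counters by incrementing only the positive delta. The arithmetic in isolation, with a plain `u64` standing in for an `IntCounter`:

```rust
// Stand-in for a monotonic Prometheus counter: it can only grow.
struct Counter(u64);

impl Counter {
    // Apply a cumulative total reported by the jitter buffer:
    // increment by the positive delta, never decrement.
    fn reconcile(&mut self, new_total: u64) {
        if new_total > self.0 {
            self.0 += new_total - self.0;
        }
    }
}

fn main() {
    let mut underruns = Counter(0);
    underruns.reconcile(3); // first report: +3
    underruns.reconcile(3); // same total again: no change
    underruns.reconcile(5); // grew by 2
    underruns.reconcile(4); // stale/lower report: ignored
    assert_eq!(underruns.0, 5);
    println!("{}", underruns.0);
}
```

This makes repeated absolute reports idempotent, so the relay can push totals every tick without tracking previous values externally.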


@@ -0,0 +1,592 @@
//! Inter-relay health probe.
//!
//! A `ProbeRunner` maintains a persistent QUIC connection to a peer relay,
//! sends 1 Ping/s, and measures RTT, loss, and jitter. Results are exported
//! as Prometheus gauges with a `target` label.
use std::collections::VecDeque;
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use prometheus::{Gauge, IntGauge, Opts, Registry};
use tokio::sync::Mutex;
use tracing::{error, info, warn};
use wzp_proto::{MediaTransport, SignalMessage};
/// Configuration for a single probe target.
#[derive(Clone, Debug)]
pub struct ProbeConfig {
pub target: SocketAddr,
pub interval: Duration,
}
impl ProbeConfig {
pub fn new(target: SocketAddr) -> Self {
Self {
target,
interval: Duration::from_secs(1),
}
}
}
/// Prometheus metrics for one probe target.
pub struct ProbeMetrics {
pub rtt_ms: Gauge,
pub loss_pct: Gauge,
pub jitter_ms: Gauge,
pub up: IntGauge,
}
impl ProbeMetrics {
/// Register probe metrics with the given `target` label value.
pub fn register(target: &str, registry: &Registry) -> Self {
let rtt_ms = Gauge::with_opts(
Opts::new("wzp_probe_rtt_ms", "RTT to peer relay in ms")
.const_label("target", target),
)
.expect("probe metric");
let loss_pct = Gauge::with_opts(
Opts::new("wzp_probe_loss_pct", "Packet loss to peer relay in %")
.const_label("target", target),
)
.expect("probe metric");
let jitter_ms = Gauge::with_opts(
Opts::new("wzp_probe_jitter_ms", "Jitter to peer relay in ms")
.const_label("target", target),
)
.expect("probe metric");
let up = IntGauge::with_opts(
Opts::new("wzp_probe_up", "1 if peer relay is reachable, 0 if not")
.const_label("target", target),
)
.expect("probe metric");
registry.register(Box::new(rtt_ms.clone())).expect("register");
registry.register(Box::new(loss_pct.clone())).expect("register");
registry.register(Box::new(jitter_ms.clone())).expect("register");
registry.register(Box::new(up.clone())).expect("register");
Self {
rtt_ms,
loss_pct,
jitter_ms,
up,
}
}
}
/// Sliding window for tracking probe results over the last N pings.
pub struct SlidingWindow {
/// Capacity (number of pings to track).
capacity: usize,
/// Timestamps of sent pings (ms since epoch) in order.
sent: VecDeque<u64>,
/// RTT values for received pongs (ms). None = no pong received yet.
rtts: VecDeque<Option<f64>>,
}
impl SlidingWindow {
pub fn new(capacity: usize) -> Self {
Self {
capacity,
sent: VecDeque::with_capacity(capacity),
rtts: VecDeque::with_capacity(capacity),
}
}
/// Record a sent ping.
pub fn record_sent(&mut self, timestamp_ms: u64) {
if self.sent.len() >= self.capacity {
self.sent.pop_front();
self.rtts.pop_front();
}
self.sent.push_back(timestamp_ms);
self.rtts.push_back(None);
}
/// Record a received pong. Returns the computed RTT in ms, or None if
/// the timestamp doesn't match any pending ping.
pub fn record_pong(&mut self, timestamp_ms: u64, now_ms: u64) -> Option<f64> {
// Find the sent ping with this timestamp
for (i, &sent_ts) in self.sent.iter().enumerate() {
if sent_ts == timestamp_ms {
let rtt = (now_ms as f64) - (sent_ts as f64);
self.rtts[i] = Some(rtt);
return Some(rtt);
}
}
None
}
/// Compute loss percentage (0.0-100.0) from the current window.
/// A ping is considered lost if it has no matching pong.
pub fn loss_pct(&self) -> f64 {
if self.sent.is_empty() {
return 0.0;
}
let total = self.rtts.len() as f64;
let lost = self.rtts.iter().filter(|r| r.is_none()).count() as f64;
(lost / total) * 100.0
}
/// Compute jitter as the standard deviation of RTT values (ms).
/// Only considers pings that received a pong.
pub fn jitter_ms(&self) -> f64 {
let rtts: Vec<f64> = self.rtts.iter().filter_map(|r| *r).collect();
if rtts.len() < 2 {
return 0.0;
}
let mean = rtts.iter().sum::<f64>() / rtts.len() as f64;
let variance = rtts.iter().map(|r| (r - mean).powi(2)).sum::<f64>() / rtts.len() as f64;
variance.sqrt()
}
/// Return the most recent RTT value, if any.
pub fn latest_rtt(&self) -> Option<f64> {
self.rtts.iter().rev().find_map(|r| *r)
}
}
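`jitter_ms` is the population standard deviation of the answered RTTs. The math on its own, using the 10/20/30/40 ms example from the tests below:

```rust
// Population standard deviation, the same formula as
// SlidingWindow::jitter_ms: mean of squared deviations, then sqrt.
fn std_dev(rtts: &[f64]) -> f64 {
    if rtts.len() < 2 {
        return 0.0;
    }
    let mean = rtts.iter().sum::<f64>() / rtts.len() as f64;
    let variance =
        rtts.iter().map(|r| (r - mean).powi(2)).sum::<f64>() / rtts.len() as f64;
    variance.sqrt()
}

fn main() {
    // mean 25, deviations ±15/±5, variance (225+25+25+225)/4 = 125
    let jitter = std_dev(&[10.0, 20.0, 30.0, 40.0]);
    assert!((jitter - 125.0_f64.sqrt()).abs() < 1e-9);
    println!("{jitter:.2}"); // ≈ 11.18
}
```

Requiring at least two samples avoids reporting a meaningless zero-variance jitter from a single pong.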
/// Runs a health probe against a single peer relay.
pub struct ProbeRunner {
config: ProbeConfig,
metrics: ProbeMetrics,
}
impl ProbeRunner {
/// Create a new probe runner, registering metrics with the given registry.
pub fn new(config: ProbeConfig, registry: &Registry) -> Self {
let target_str = config.target.to_string();
let metrics = ProbeMetrics::register(&target_str, registry);
Self { config, metrics }
}
/// Run the probe forever. This function never returns under normal operation.
/// It connects to the target relay, sends Ping every `interval`, and processes
/// Pong replies to compute RTT, loss, and jitter.
pub async fn run(&self) -> ! {
loop {
info!(target = %self.config.target, "probe connecting...");
match self.run_session().await {
Ok(()) => {
// Session ended cleanly (shouldn't happen in practice)
warn!(target = %self.config.target, "probe session ended, reconnecting in 5s");
}
Err(e) => {
error!(target = %self.config.target, "probe session error: {e}, reconnecting in 5s");
}
}
self.metrics.up.set(0);
self.metrics.rtt_ms.set(0.0);
tokio::time::sleep(Duration::from_secs(5)).await;
}
}
/// Run one probe session (one QUIC connection). Returns when the connection drops.
async fn run_session(&self) -> anyhow::Result<()> {
// Create a client-only endpoint on an ephemeral port
let bind_addr: SocketAddr = "0.0.0.0:0".parse().unwrap();
let endpoint = wzp_transport::create_endpoint(bind_addr, None)?;
let client_cfg = wzp_transport::client_config();
let conn = wzp_transport::connect(
&endpoint,
self.config.target,
"_probe",
client_cfg,
)
.await?;
let transport = Arc::new(wzp_transport::QuinnTransport::new(conn));
self.metrics.up.set(1);
info!(target = %self.config.target, "probe connected");
let window = Arc::new(Mutex::new(SlidingWindow::new(60)));
// Spawn recv task for pong messages
let recv_transport = transport.clone();
let recv_window = window.clone();
let rtt_gauge = self.metrics.rtt_ms.clone();
let loss_gauge = self.metrics.loss_pct.clone();
let jitter_gauge = self.metrics.jitter_ms.clone();
let up_gauge = self.metrics.up.clone();
let recv_handle = tokio::spawn(async move {
loop {
match recv_transport.recv_signal().await {
Ok(Some(SignalMessage::Pong { timestamp_ms })) => {
let now_ms = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis() as u64;
let mut w = recv_window.lock().await;
if let Some(rtt) = w.record_pong(timestamp_ms, now_ms) {
rtt_gauge.set(rtt);
}
loss_gauge.set(w.loss_pct());
jitter_gauge.set(w.jitter_ms());
}
Ok(Some(_)) => {
// Ignore non-Pong signals
}
Ok(None) => {
info!("probe recv: connection closed");
up_gauge.set(0);
break;
}
Err(e) => {
error!("probe recv error: {e}");
up_gauge.set(0);
break;
}
}
}
});
// Send ping loop
let mut interval = tokio::time::interval(self.config.interval);
loop {
interval.tick().await;
if recv_handle.is_finished() {
// Recv task died — connection is lost
return Ok(());
}
let timestamp_ms = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis() as u64;
{
let mut w = window.lock().await;
w.record_sent(timestamp_ms);
}
if let Err(e) = transport
.send_signal(&SignalMessage::Ping { timestamp_ms })
.await
{
error!(target = %self.config.target, "probe ping send error: {e}");
recv_handle.abort();
return Err(e.into());
}
}
}
}
/// Coordinates multiple `ProbeRunner` instances for mesh mode.
///
/// Each relay probes all configured peers concurrently. The `ProbeMesh` owns the
/// runners and spawns them as independent tokio tasks.
pub struct ProbeMesh {
runners: Vec<ProbeRunner>,
}
impl ProbeMesh {
/// Create a new mesh coordinator, registering metrics for every target.
pub fn new(targets: Vec<SocketAddr>, registry: &Registry) -> Self {
let runners = targets
.into_iter()
.map(|addr| {
let config = ProbeConfig::new(addr);
ProbeRunner::new(config, registry)
})
.collect();
Self { runners }
}
/// Spawn all runners as concurrent tokio tasks. This consumes the mesh.
pub async fn run_all(self) {
let mut handles = Vec::with_capacity(self.runners.len());
for runner in self.runners {
let target = runner.config.target;
info!(target = %target, "spawning mesh probe");
handles.push(tokio::spawn(async move { runner.run().await }));
}
// Probes run forever; if we ever need to wait:
for h in handles {
let _ = h.await;
}
}
/// Number of probe targets in this mesh.
pub fn target_count(&self) -> usize {
self.runners.len()
}
}
/// Build a human-readable mesh health table from probe metrics in the registry.
///
/// Scans the registry for `wzp_probe_*` gauges and formats them into a table.
pub fn mesh_summary(registry: &Registry) -> String {
use std::collections::BTreeMap;
let families = registry.gather();
// Collect per-target values: target -> (rtt, loss, jitter, up)
let mut targets: BTreeMap<String, (f64, f64, f64, bool)> = BTreeMap::new();
for family in &families {
let name = family.get_name();
for metric in family.get_metric() {
// Find the "target" label
let target_label = metric
.get_label()
.iter()
.find(|l| l.get_name() == "target");
let target = match target_label {
Some(l) => l.get_value().to_string(),
None => continue,
};
let entry = targets.entry(target).or_insert((0.0, 0.0, 0.0, false));
match name {
"wzp_probe_rtt_ms" => entry.0 = metric.get_gauge().get_value(),
"wzp_probe_loss_pct" => entry.1 = metric.get_gauge().get_value(),
"wzp_probe_jitter_ms" => entry.2 = metric.get_gauge().get_value(),
"wzp_probe_up" => entry.3 = metric.get_gauge().get_value() as i64 == 1,
_ => {}
}
}
}
let mut out = String::new();
out.push_str("Relay Mesh Health\n");
out.push_str("\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\u{2500}\n");
out.push_str(&format!(
"{:<20} {:>6} {:>6} {:>7} {}\n",
"Target", "RTT", "Loss", "Jitter", "Status"
));
for (target, (rtt, loss, jitter, up)) in &targets {
let status = if *up { "UP" } else { "DOWN" };
out.push_str(&format!(
"{:<20} {:>5.0}ms {:>5.1}% {:>5.0}ms {}\n",
target, rtt, loss, jitter, status
));
}
if targets.is_empty() {
out.push_str(" (no probe targets configured)\n");
}
out
}
/// Handle an incoming Ping signal by replying with a Pong carrying the same timestamp.
/// Returns true if the message was a Ping and was handled, false otherwise.
pub async fn handle_ping(
transport: &wzp_transport::QuinnTransport,
msg: &SignalMessage,
) -> bool {
if let SignalMessage::Ping { timestamp_ms } = msg {
if let Err(e) = transport
.send_signal(&SignalMessage::Pong {
timestamp_ms: *timestamp_ms,
})
.await
{
warn!("failed to send Pong reply: {e}");
}
true
} else {
false
}
}
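Because the Pong echoes the Ping's `timestamp_ms` unchanged, the prober computes RTT entirely from its own clock. That core step, sketched standalone:

```rust
// RTT from an echoed timestamp: the pong carries back the ping's
// timestamp_ms, so rtt = receive_time - echoed value. The sender
// both stamps and receives, so a single clock is involved; None
// guards against a stale or garbled echo that appears "future".
fn rtt_ms(echoed_timestamp_ms: u64, now_ms: u64) -> Option<u64> {
    now_ms.checked_sub(echoed_timestamp_ms)
}

fn main() {
    assert_eq!(rtt_ms(1_000, 1_050), Some(50));
    assert_eq!(rtt_ms(2_000, 1_999), None);
    println!("ok");
}
```

The peer's clock never enters the measurement, which is why probe targets need no time synchronization.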
#[cfg(test)]
mod tests {
use super::*;
use prometheus::Encoder;
#[test]
fn probe_metrics_register() {
let registry = Registry::new();
let _metrics = ProbeMetrics::register("127.0.0.1:4433", &registry);
let encoder = prometheus::TextEncoder::new();
let families = registry.gather();
let mut buf = Vec::new();
encoder.encode(&families, &mut buf).unwrap();
let output = String::from_utf8(buf).unwrap();
assert!(output.contains("wzp_probe_rtt_ms"), "missing wzp_probe_rtt_ms");
assert!(output.contains("wzp_probe_loss_pct"), "missing wzp_probe_loss_pct");
assert!(output.contains("wzp_probe_jitter_ms"), "missing wzp_probe_jitter_ms");
assert!(output.contains("wzp_probe_up"), "missing wzp_probe_up");
assert!(
output.contains("target=\"127.0.0.1:4433\""),
"missing target label"
);
}
#[test]
fn rtt_calculation() {
let mut window = SlidingWindow::new(60);
// Send a ping at t=1000
window.record_sent(1000);
// Receive pong at t=1050 => RTT = 50ms
let rtt = window.record_pong(1000, 1050);
assert_eq!(rtt, Some(50.0));
// Send at t=2000, receive at t=2030 => RTT = 30ms
window.record_sent(2000);
let rtt = window.record_pong(2000, 2030);
assert_eq!(rtt, Some(30.0));
assert_eq!(window.latest_rtt(), Some(30.0));
// Unknown timestamp returns None
let rtt = window.record_pong(9999, 10000);
assert!(rtt.is_none());
}
#[test]
fn loss_calculation() {
let mut window = SlidingWindow::new(10);
// Send 10 pings
for i in 0..10 {
window.record_sent(i * 1000);
}
// Receive pongs for 7 out of 10 (miss indices 2, 5, 8)
for i in 0..10u64 {
if i == 2 || i == 5 || i == 8 {
continue; // lost
}
window.record_pong(i * 1000, i * 1000 + 40);
}
// 3 out of 10 lost = 30%
let loss = window.loss_pct();
assert!((loss - 30.0).abs() < 0.01, "expected ~30%, got {loss}");
}
#[test]
fn jitter_calculation() {
let mut window = SlidingWindow::new(10);
// Send 4 pings with known RTTs: 10, 20, 30, 40
// Mean = 25, variance = ((15^2 + 5^2 + 5^2 + 15^2) / 4) = (225+25+25+225)/4 = 125
// std dev = sqrt(125) ≈ 11.18
let rtts = [10.0, 20.0, 30.0, 40.0];
for (i, rtt) in rtts.iter().enumerate() {
let sent = (i as u64) * 1000;
window.record_sent(sent);
window.record_pong(sent, sent + *rtt as u64);
}
let jitter = window.jitter_ms();
assert!(
(jitter - 11.18).abs() < 0.1,
"expected jitter ~11.18ms, got {jitter}"
);
}
#[test]
fn sliding_window_eviction() {
let mut window = SlidingWindow::new(5);
// Fill window
for i in 0..5 {
window.record_sent(i * 1000);
}
assert_eq!(window.sent.len(), 5);
// Add one more — oldest should be evicted
window.record_sent(5000);
assert_eq!(window.sent.len(), 5);
assert_eq!(*window.sent.front().unwrap(), 1000);
// All 5 are unanswered
assert!((window.loss_pct() - 100.0).abs() < 0.01);
}
#[test]
fn empty_window_edge_cases() {
let window = SlidingWindow::new(60);
assert_eq!(window.loss_pct(), 0.0);
assert_eq!(window.jitter_ms(), 0.0);
assert!(window.latest_rtt().is_none());
}
#[test]
fn mesh_creates_runners() {
let registry = Registry::new();
let targets: Vec<SocketAddr> = vec![
"127.0.0.1:4433".parse().unwrap(),
"127.0.0.2:4433".parse().unwrap(),
"127.0.0.3:4433".parse().unwrap(),
];
let mesh = ProbeMesh::new(targets, &registry);
assert_eq!(mesh.target_count(), 3);
// Verify metrics were registered for each target
let encoder = prometheus::TextEncoder::new();
let families = registry.gather();
let mut buf = Vec::new();
encoder.encode(&families, &mut buf).unwrap();
let output = String::from_utf8(buf).unwrap();
assert!(output.contains("target=\"127.0.0.1:4433\""));
assert!(output.contains("target=\"127.0.0.2:4433\""));
assert!(output.contains("target=\"127.0.0.3:4433\""));
}
#[test]
fn mesh_summary_empty() {
let registry = Registry::new();
let summary = mesh_summary(&registry);
// Should contain the header
assert!(summary.contains("Relay Mesh Health"));
assert!(summary.contains("Target"));
assert!(summary.contains("RTT"));
assert!(summary.contains("Loss"));
assert!(summary.contains("Jitter"));
assert!(summary.contains("Status"));
// Should indicate no targets
assert!(summary.contains("no probe targets configured"));
}
#[test]
fn mesh_summary_with_targets() {
let registry = Registry::new();
// Register probe metrics for two targets and set values
let m1 = ProbeMetrics::register("relay-b:4433", &registry);
m1.rtt_ms.set(12.0);
m1.loss_pct.set(0.0);
m1.jitter_ms.set(2.0);
m1.up.set(1);
let m2 = ProbeMetrics::register("relay-c:4433", &registry);
m2.rtt_ms.set(45.0);
m2.loss_pct.set(0.1);
m2.jitter_ms.set(5.0);
m2.up.set(0);
let summary = mesh_summary(&registry);
assert!(summary.contains("relay-b:4433"));
assert!(summary.contains("relay-c:4433"));
assert!(summary.contains("UP"));
assert!(summary.contains("DOWN"));
// Should NOT contain "no probe targets"
assert!(!summary.contains("no probe targets configured"));
}
#[test]
fn mesh_zero_targets() {
let registry = Registry::new();
let mesh = ProbeMesh::new(vec![], &registry);
assert_eq!(mesh.target_count(), 0);
}
}


@@ -3,15 +3,21 @@
//! Each room holds N participants. When one participant sends a media packet,
//! the relay forwards it to all other participants in the room (SFU model).
use std::collections::{HashMap, HashSet};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::time::Duration;
use bytes::Bytes;
use tokio::sync::Mutex;
use tracing::{error, info, warn};
use wzp_proto::packet::TrunkFrame;
use wzp_proto::MediaTransport;
use crate::metrics::RelayMetrics;
use crate::trunk::TrunkBatcher;
/// Unique participant ID within a room.
pub type ParticipantId = u64;
@@ -24,7 +30,7 @@ fn next_id() -> ParticipantId {
/// A participant in a room.
struct Participant {
id: ParticipantId,
_addr: std::net::SocketAddr,
transport: Arc<wzp_transport::QuinnTransport>,
}
@@ -43,7 +49,7 @@ impl Room {
fn add(&mut self, addr: std::net::SocketAddr, transport: Arc<wzp_transport::QuinnTransport>) -> ParticipantId {
let id = next_id();
info!(room_size = self.participants.len() + 1, participant = id, %addr, "joined room");
self.participants.push(Participant { id, _addr: addr, transport });
id
}
@@ -72,24 +78,67 @@ impl Room {
/// Manages all rooms on the relay.
pub struct RoomManager {
rooms: HashMap<String, Room>,
/// Room access control list. Maps hashed room name → allowed fingerprints.
/// When `None`, rooms are open (no auth mode). When `Some`, only listed
/// fingerprints can join the corresponding room.
acl: Option<HashMap<String, HashSet<String>>>,
}
impl RoomManager {
pub fn new() -> Self {
Self {
rooms: HashMap::new(),
acl: None,
}
}
/// Create a room manager with ACL enforcement enabled.
pub fn with_acl() -> Self {
Self {
rooms: HashMap::new(),
acl: Some(HashMap::new()),
}
}
/// Grant a fingerprint access to a room.
pub fn allow(&mut self, room_name: &str, fingerprint: &str) {
if let Some(ref mut acl) = self.acl {
acl.entry(room_name.to_string())
.or_default()
.insert(fingerprint.to_string());
}
}
/// Check if a fingerprint is authorized to join a room.
/// Returns true if ACL is disabled (open mode) or the fingerprint is in the allow list.
pub fn is_authorized(&self, room_name: &str, fingerprint: Option<&str>) -> bool {
match (&self.acl, fingerprint) {
(None, _) => true, // no ACL = open
(Some(_), None) => false, // ACL enabled but no fingerprint
(Some(acl), Some(fp)) => {
// Room not in ACL = open room (allow anyone authenticated)
match acl.get(room_name) {
None => true,
Some(allowed) => allowed.contains(fp),
}
}
}
}
/// Join a room. Returns the participant ID or an error if unauthorized.
pub fn join(
&mut self,
room_name: &str,
addr: std::net::SocketAddr,
transport: Arc<wzp_transport::QuinnTransport>,
fingerprint: Option<&str>,
) -> Result<ParticipantId, String> {
if !self.is_authorized(room_name, fingerprint) {
warn!(room = room_name, fingerprint = ?fingerprint, "unauthorized room join attempt");
return Err("not authorized for this room".to_string());
}
let room = self.rooms.entry(room_name.to_string()).or_insert_with(Room::new);
Ok(room.add(addr, transport))
}
/// Leave a room. Removes the room if empty.
@@ -126,13 +175,100 @@ impl RoomManager {
}
}
// ---------------------------------------------------------------------------
// TrunkedForwarder — wraps a transport and batches outgoing media into trunk
// frames so multiple packets ride a single QUIC datagram.
// ---------------------------------------------------------------------------
/// Wraps a [`QuinnTransport`] with a [`TrunkBatcher`] so that small media
/// packets are accumulated and sent together in a single QUIC datagram.
pub struct TrunkedForwarder {
transport: Arc<wzp_transport::QuinnTransport>,
batcher: TrunkBatcher,
session_id: [u8; 2],
}
impl TrunkedForwarder {
/// Create a new trunked forwarder.
///
/// `session_id` tags every entry pushed into the batcher so the receiver
/// can demultiplex packets by session.
pub fn new(transport: Arc<wzp_transport::QuinnTransport>, session_id: [u8; 2]) -> Self {
Self {
transport,
batcher: TrunkBatcher::new(),
session_id,
}
}
/// Push a media packet into the batcher. If the batcher is full it will
/// flush automatically and the resulting trunk frame is sent immediately.
pub async fn send(&mut self, pkt: &wzp_proto::MediaPacket) -> anyhow::Result<()> {
let payload: Bytes = pkt.to_bytes();
if let Some(frame) = self.batcher.push(self.session_id, payload) {
self.send_frame(&frame)?;
}
Ok(())
}
/// Flush any pending packets — called on the 5 ms timer tick.
pub async fn flush(&mut self) -> anyhow::Result<()> {
if let Some(frame) = self.batcher.flush() {
self.send_frame(&frame)?;
}
Ok(())
}
/// Return the flush interval configured on the inner batcher.
pub fn flush_interval(&self) -> Duration {
self.batcher.flush_interval
}
fn send_frame(&self, frame: &TrunkFrame) -> anyhow::Result<()> {
self.transport.send_trunk(frame).map_err(|e| anyhow::anyhow!(e))
}
}
// ---------------------------------------------------------------------------
// run_participant — the hot-path forwarding loop
// ---------------------------------------------------------------------------
/// Run the receive loop for one participant in a room.
/// Forwards all received packets to every other participant.
///
/// When `trunking_enabled` is true, outgoing packets are accumulated per-peer
/// into [`TrunkedForwarder`]s and flushed every 5 ms or when the batcher is
/// full, reducing QUIC datagram overhead.
pub async fn run_participant(
room_mgr: Arc<Mutex<RoomManager>>,
room_name: String,
participant_id: ParticipantId,
transport: Arc<wzp_transport::QuinnTransport>,
metrics: Arc<RelayMetrics>,
session_id: &str,
trunking_enabled: bool,
) {
if trunking_enabled {
run_participant_trunked(
room_mgr, room_name, participant_id, transport, metrics, session_id,
)
.await;
} else {
run_participant_plain(
room_mgr, room_name, participant_id, transport, metrics, session_id,
)
.await;
}
}
/// Plain (non-trunked) forwarding loop — original behaviour.
async fn run_participant_plain(
room_mgr: Arc<Mutex<RoomManager>>,
room_name: String,
participant_id: ParticipantId,
transport: Arc<wzp_transport::QuinnTransport>,
metrics: Arc<RelayMetrics>,
session_id: &str,
) {
let addr = transport.connection().remote_address();
let mut packets_forwarded = 0u64;
@@ -145,11 +281,21 @@ pub async fn run_participant(
break;
}
Err(e) => {
let msg = e.to_string();
if msg.contains("timed out") || msg.contains("reset") || msg.contains("closed") {
info!(%addr, participant = participant_id, "connection closed: {e}");
} else {
error!(%addr, participant = participant_id, "recv error: {e}");
}
break;
}
};
// Update per-session quality metrics if a quality report is present
if let Some(ref report) = pkt.quality_report {
metrics.update_session_quality(session_id, report);
}
// Get current list of other participants
let others = {
let mgr = room_mgr.lock().await;
@@ -157,6 +303,7 @@ pub async fn run_participant(
};
// Forward to all others
let pkt_bytes = pkt.payload.len() as u64;
for other in &others {
// Best-effort: if one send fails, continue to others
if let Err(e) = other.send_media(&pkt).await {
@@ -165,6 +312,9 @@ pub async fn run_participant(
}
}
let fan_out = others.len() as u64;
metrics.packets_forwarded.inc_by(fan_out);
metrics.bytes_forwarded.inc_by(pkt_bytes * fan_out);
packets_forwarded += 1;
if packets_forwarded % 500 == 0 {
let room_size = {
@@ -186,6 +336,120 @@ pub async fn run_participant(
mgr.leave(&room_name, participant_id);
}
/// Trunked forwarding loop — batches outgoing packets per peer.
async fn run_participant_trunked(
room_mgr: Arc<Mutex<RoomManager>>,
room_name: String,
participant_id: ParticipantId,
transport: Arc<wzp_transport::QuinnTransport>,
metrics: Arc<RelayMetrics>,
session_id: &str,
) {
use std::collections::HashMap;
let addr = transport.connection().remote_address();
let mut packets_forwarded = 0u64;
// Per-peer TrunkedForwarders, keyed by the peer's remote SocketAddr,
// which is unique per connection and stable for its lifetime.
let mut forwarders: HashMap<std::net::SocketAddr, TrunkedForwarder> = HashMap::new();
// Derive a 2-byte session tag from the session_id hex string.
let sid_bytes: [u8; 2] = parse_session_id_bytes(session_id);
let mut flush_interval = tokio::time::interval(Duration::from_millis(5));
// Don't let missed ticks pile up — skip them and move on.
flush_interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Skip);
loop {
tokio::select! {
biased;
result = transport.recv_media() => {
let pkt = match result {
Ok(Some(pkt)) => pkt,
Ok(None) => {
info!(%addr, participant = participant_id, "disconnected");
break;
}
Err(e) => {
error!(%addr, participant = participant_id, "recv error: {e}");
break;
}
};
if let Some(ref report) = pkt.quality_report {
metrics.update_session_quality(session_id, report);
}
let others = {
let mgr = room_mgr.lock().await;
mgr.others(&room_name, participant_id)
};
let pkt_bytes = pkt.payload.len() as u64;
for other in &others {
let peer_addr = other.connection().remote_address();
let fwd = forwarders
.entry(peer_addr)
.or_insert_with(|| TrunkedForwarder::new(other.clone(), sid_bytes));
if let Err(e) = fwd.send(&pkt).await {
// Best-effort fan-out: log and continue to the other peers.
warn!(%peer_addr, "trunk send failed: {e}");
}
}
let fan_out = others.len() as u64;
metrics.packets_forwarded.inc_by(fan_out);
metrics.bytes_forwarded.inc_by(pkt_bytes * fan_out);
packets_forwarded += 1;
if packets_forwarded % 500 == 0 {
let room_size = {
let mgr = room_mgr.lock().await;
mgr.room_size(&room_name)
};
info!(
room = %room_name,
participant = participant_id,
forwarded = packets_forwarded,
room_size,
"participant stats (trunked)"
);
}
}
_ = flush_interval.tick() => {
for fwd in forwarders.values_mut() {
if let Err(e) = fwd.flush().await {
warn!("trunk flush failed: {e}");
}
}
}
}
}
// Final flush — send any remaining buffered packets.
for fwd in forwarders.values_mut() {
let _ = fwd.flush().await;
}
let mut mgr = room_mgr.lock().await;
mgr.leave(&room_name, participant_id);
}
/// Parse up to the first 2 bytes of a hex session-id string into `[u8; 2]`.
fn parse_session_id_bytes(session_id: &str) -> [u8; 2] {
let bytes: Vec<u8> = (0..session_id.len())
.step_by(2)
.filter_map(|i| u8::from_str_radix(session_id.get(i..i + 2)?, 16).ok())
.collect();
let mut out = [0u8; 2];
for (i, b) in bytes.iter().take(2).enumerate() {
out[i] = *b;
}
out
}
#[cfg(test)]
mod tests {
use super::*;
@@ -193,8 +457,125 @@ mod tests {
#[test]
fn room_join_leave() {
let mut mgr = RoomManager::new();
// Can't test with real transports, but test the room logic
assert_eq!(mgr.room_size("test"), 0);
assert!(mgr.list().is_empty());
}
#[test]
fn acl_open_mode_allows_all() {
let mgr = RoomManager::new();
assert!(mgr.is_authorized("any-room", None));
assert!(mgr.is_authorized("any-room", Some("abc")));
}
#[test]
fn acl_enforced_requires_fingerprint() {
let mgr = RoomManager::with_acl();
assert!(!mgr.is_authorized("room1", None));
// Room not in ACL = open to any authenticated user
assert!(mgr.is_authorized("room1", Some("abc")));
}
#[test]
fn acl_restricts_to_allowed() {
let mut mgr = RoomManager::with_acl();
mgr.allow("room1", "alice");
mgr.allow("room1", "bob");
assert!(mgr.is_authorized("room1", Some("alice")));
assert!(mgr.is_authorized("room1", Some("bob")));
assert!(!mgr.is_authorized("room1", Some("eve")));
}
#[test]
fn parse_session_id_bytes_works() {
assert_eq!(parse_session_id_bytes("abcd"), [0xab, 0xcd]);
assert_eq!(parse_session_id_bytes("ff00"), [0xff, 0x00]);
assert_eq!(parse_session_id_bytes(""), [0x00, 0x00]);
// Longer hex strings: only first 2 bytes taken
assert_eq!(parse_session_id_bytes("aabbccdd"), [0xaa, 0xbb]);
}
/// Helper: create a minimal MediaPacket with the given payload bytes.
fn make_test_packet(payload: &[u8]) -> wzp_proto::MediaPacket {
wzp_proto::MediaPacket {
header: wzp_proto::packet::MediaHeader {
version: 0,
is_repair: false,
codec_id: wzp_proto::CodecId::Opus16k,
has_quality_report: false,
fec_ratio_encoded: 0,
seq: 1,
timestamp: 100,
fec_block: 0,
fec_symbol: 0,
reserved: 0,
csrc_count: 0,
},
payload: Bytes::from(payload.to_vec()),
quality_report: None,
}
}
/// Push 3 packets into a batcher (simulating TrunkedForwarder.send),
/// then flush and verify all 3 appear in a single TrunkFrame.
#[test]
fn trunked_forwarder_batches() {
let session_id: [u8; 2] = [0x00, 0x01];
let mut batcher = TrunkBatcher::new();
// Ensure max_entries is high enough that 3 packets don't auto-flush.
batcher.max_entries = 10;
batcher.max_bytes = 4096;
let pkts = [
make_test_packet(b"aaa"),
make_test_packet(b"bbb"),
make_test_packet(b"ccc"),
];
for pkt in &pkts {
let payload = pkt.to_bytes();
let flushed = batcher.push(session_id, payload);
// Should NOT auto-flush — we are below max_entries.
assert!(flushed.is_none(), "unexpected auto-flush");
}
// Explicit flush (simulates the 5 ms timer tick).
let frame = batcher.flush().expect("expected a frame with 3 entries");
assert_eq!(frame.len(), 3);
for entry in &frame.packets {
assert_eq!(entry.session_id, session_id);
}
}
/// Push exactly max_entries packets and verify the batcher auto-flushes
/// on the last push (simulating TrunkedForwarder.send triggering a send).
#[test]
fn trunked_forwarder_auto_flushes() {
let session_id: [u8; 2] = [0x00, 0x02];
let mut batcher = TrunkBatcher::new();
batcher.max_entries = 5;
batcher.max_bytes = 8192;
let pkt = make_test_packet(b"hello");
let mut auto_flushed: Option<wzp_proto::packet::TrunkFrame> = None;
for i in 0..5 {
let payload = pkt.to_bytes();
if let Some(frame) = batcher.push(session_id, payload) {
assert!(auto_flushed.is_none(), "should auto-flush exactly once");
auto_flushed = Some(frame);
// The auto-flush should happen on the 5th push (max_entries = 5).
assert_eq!(i, 4, "expected auto-flush on the last push");
}
}
let frame = auto_flushed.expect("batcher should have auto-flushed at max_entries");
assert_eq!(frame.len(), 5);
for entry in &frame.packets {
assert_eq!(entry.session_id, session_id);
}
// Batcher should now be empty — nothing to flush.
assert!(batcher.flush().is_none());
}
}


@@ -1,6 +1,7 @@
//! Session manager — tracks active call sessions on the relay.
use std::collections::HashMap;
use std::time::Instant;
use wzp_proto::{QualityProfile, Session};
@@ -9,6 +10,26 @@ use crate::pipeline::{PipelineConfig, RelayPipeline};
/// Unique identifier for a relay session.
pub type SessionId = [u8; 16];
/// Lifecycle state of a concurrent session.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SessionState {
Active,
Closing,
}
/// Lightweight metadata for a concurrent session (room-mode tracking).
#[derive(Debug, Clone)]
pub struct SessionInfo {
/// Which room this session belongs to.
pub room_name: String,
/// Client fingerprint (present when auth is enabled).
pub fingerprint: Option<String>,
/// When the session was created.
pub connected_at: Instant,
/// Current lifecycle state.
pub state: SessionState,
}
/// A single active call session on the relay.
pub struct RelaySession {
/// Protocol session state machine.
@@ -47,8 +68,14 @@ impl RelaySession {
}
/// Manages all active sessions on a relay.
///
/// Combines two layers of tracking:
/// - `sessions`: heavy `RelaySession` objects (pipeline state machines, used in forward mode)
/// - `tracked`: lightweight `SessionInfo` entries (room + fingerprint, used in room mode to
/// enforce `max_sessions` and answer lifecycle queries)
pub struct SessionManager {
sessions: HashMap<SessionId, RelaySession>,
tracked: HashMap<SessionId, SessionInfo>,
max_sessions: usize,
}
@@ -56,17 +83,20 @@ impl SessionManager {
pub fn new(max_sessions: usize) -> Self {
Self {
sessions: HashMap::new(),
tracked: HashMap::new(),
max_sessions,
}
}
// ── Heavy session API (forward-mode pipelines) ──────────────────────
/// Create a new pipeline session. Returns None if at capacity.
pub fn create_pipeline_session(
&mut self,
session_id: SessionId,
config: PipelineConfig,
) -> Option<&mut RelaySession> {
if self.total_count() >= self.max_sessions {
return None;
}
self.sessions
@@ -75,53 +105,124 @@ impl SessionManager {
self.sessions.get_mut(&session_id)
}
/// Get a pipeline session by ID.
pub fn get_session(&mut self, id: &SessionId) -> Option<&mut RelaySession> {
self.sessions.get_mut(id)
}
/// Remove a pipeline session.
pub fn remove_pipeline_session(&mut self, id: &SessionId) -> Option<RelaySession> {
self.sessions.remove(id)
}
/// Number of active pipeline sessions.
pub fn pipeline_active_count(&self) -> usize {
self.sessions.values().filter(|s| s.is_active()).count()
}
/// Total pipeline sessions (including inactive/closing).
pub fn pipeline_total_count(&self) -> usize {
self.sessions.len()
}
/// Remove pipeline sessions idle for longer than `timeout_ms`.
pub fn expire_idle(&mut self, now_ms: u64, timeout_ms: u64) -> usize {
let before = self.sessions.len();
self.sessions
.retain(|_, s| now_ms.saturating_sub(s.last_activity_ms) < timeout_ms);
before - self.sessions.len()
}
// ── Lightweight concurrent-session API (room mode) ──────────────────
/// Register a new concurrent session.
/// Returns the `SessionId` on success, or an error string if `max_sessions` is exceeded.
pub fn create_session(
&mut self,
room: &str,
fingerprint: Option<String>,
) -> Result<SessionId, String> {
if self.total_count() >= self.max_sessions {
return Err(format!(
"max sessions ({}) exceeded",
self.max_sessions
));
}
let id = rand_session_id();
self.tracked.insert(id, SessionInfo {
room_name: room.to_string(),
fingerprint,
connected_at: Instant::now(),
state: SessionState::Active,
});
Ok(id)
}
/// Remove a tracked session.
pub fn remove_session(&mut self, id: SessionId) {
self.tracked.remove(&id);
}
/// Number of currently tracked (room-mode) sessions.
pub fn active_count(&self) -> usize {
self.tracked.values().filter(|s| s.state == SessionState::Active).count()
}
/// Return all session IDs that belong to a given room.
pub fn sessions_in_room(&self, room: &str) -> Vec<SessionId> {
self.tracked
.iter()
.filter(|(_, info)| info.room_name == room)
.map(|(id, _)| *id)
.collect()
}
/// Get metadata for a tracked session.
pub fn session_info(&self, id: SessionId) -> Option<&SessionInfo> {
self.tracked.get(&id)
}
/// Total sessions across both tracking layers.
pub fn total_count(&self) -> usize {
self.sessions.len() + self.tracked.len()
}
}
/// Generate a random 16-byte session identifier.
fn rand_session_id() -> SessionId {
let mut id = [0u8; 16];
// Use a simple monotonic + random source to avoid pulling in `rand` crate.
// Hash the instant + a counter for uniqueness.
use std::sync::atomic::{AtomicU64, Ordering};
static CTR: AtomicU64 = AtomicU64::new(1);
let ctr = CTR.fetch_add(1, Ordering::Relaxed);
let bytes = ctr.to_le_bytes();
id[..8].copy_from_slice(&bytes);
// Mix in wall-clock entropy for the upper half. (`Instant::now().elapsed()`
// is always ~0 ns, so use nanoseconds since the Unix epoch instead.)
let t = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_nanos() as u64)
.unwrap_or(0);
id[8..16].copy_from_slice(&t.to_le_bytes());
id
}
#[cfg(test)]
mod tests {
use super::*;
// ── Pipeline session tests (pre-existing, adapted to renamed API) ───
#[test]
fn create_and_get_pipeline_session() {
let mut mgr = SessionManager::new(10);
let id = [1u8; 16];
mgr.create_pipeline_session(id, PipelineConfig::default());
assert_eq!(mgr.total_count(), 1);
assert!(mgr.get_session(&id).is_some());
}
#[test]
fn respects_max_pipeline_sessions() {
let mut mgr = SessionManager::new(1);
mgr.create_pipeline_session([1u8; 16], PipelineConfig::default());
let result = mgr.create_pipeline_session([2u8; 16], PipelineConfig::default());
assert!(result.is_none());
}
@@ -129,10 +230,73 @@ mod tests {
fn expire_idle_removes_old() {
let mut mgr = SessionManager::new(10);
let id = [1u8; 16];
mgr.create_pipeline_session(id, PipelineConfig::default());
// Session has last_activity_ms = 0, current time = 60000, timeout = 30000
let expired = mgr.expire_idle(60_000, 30_000);
assert_eq!(expired, 1);
assert_eq!(mgr.pipeline_total_count(), 0);
}
// ── Concurrent session (room-mode) tests ────────────────────────────
#[test]
fn create_and_remove() {
let mut mgr = SessionManager::new(10);
let id = mgr.create_session("room-a", Some("fp123".into())).unwrap();
assert_eq!(mgr.active_count(), 1);
mgr.remove_session(id);
assert_eq!(mgr.active_count(), 0);
}
#[test]
fn max_sessions_enforced() {
let mut mgr = SessionManager::new(2);
mgr.create_session("r1", None).unwrap();
mgr.create_session("r2", None).unwrap();
let err = mgr.create_session("r3", None);
assert!(err.is_err());
assert!(err.unwrap_err().contains("max sessions"));
}
#[test]
fn sessions_in_room_tracking() {
let mut mgr = SessionManager::new(10);
let a1 = mgr.create_session("alpha", None).unwrap();
let _a2 = mgr.create_session("alpha", None).unwrap();
let _b1 = mgr.create_session("beta", None).unwrap();
let alpha_ids = mgr.sessions_in_room("alpha");
assert_eq!(alpha_ids.len(), 2);
assert!(alpha_ids.contains(&a1));
let beta_ids = mgr.sessions_in_room("beta");
assert_eq!(beta_ids.len(), 1);
let empty = mgr.sessions_in_room("gamma");
assert!(empty.is_empty());
}
#[test]
fn session_info_returns_correct_data() {
let mut mgr = SessionManager::new(10);
let id = mgr.create_session("room-x", Some("alice-fp".into())).unwrap();
let info = mgr.session_info(id).expect("session should exist");
assert_eq!(info.room_name, "room-x");
assert_eq!(info.fingerprint.as_deref(), Some("alice-fp"));
assert_eq!(info.state, SessionState::Active);
// Non-existent session returns None
assert!(mgr.session_info([0xFFu8; 16]).is_none());
}
#[test]
fn max_sessions_shared_across_both_layers() {
let mut mgr = SessionManager::new(2);
// One pipeline session + one tracked session = 2 = at capacity
mgr.create_pipeline_session([1u8; 16], PipelineConfig::default());
mgr.create_session("room", None).unwrap();
// Both layers should now reject
assert!(mgr.create_session("room", None).is_err());
assert!(mgr.create_pipeline_session([2u8; 16], PipelineConfig::default()).is_none());
}
}


@@ -0,0 +1,152 @@
//! Trunk batching — accumulates media packets from multiple sessions into
//! [`TrunkFrame`]s that fit inside a single QUIC datagram.
use std::time::Duration;
use bytes::Bytes;
use wzp_proto::packet::{TrunkEntry, TrunkFrame};
/// Batches individual session packets into [`TrunkFrame`]s.
///
/// A trunk frame is flushed when any of the following thresholds are hit:
/// - `max_entries` — maximum number of packets per trunk.
/// - `max_bytes` — maximum total wire size (should fit one UDP datagram).
///
/// The caller is responsible for timer-based flushing using [`flush_interval`]
/// and calling [`flush`] when the interval expires.
pub struct TrunkBatcher {
pending: TrunkFrame,
/// Current accumulated wire size of the pending frame.
pending_bytes: usize,
/// Maximum packets per trunk (default 10).
pub max_entries: usize,
/// Maximum total wire bytes per trunk (default 1200, fits in one UDP datagram).
pub max_bytes: usize,
/// Maximum wait before flushing (default 5 ms). Used by the caller for timer scheduling.
pub flush_interval: Duration,
}
impl TrunkBatcher {
/// Header size: the 2-byte count prefix present in every TrunkFrame.
const FRAME_HEADER: usize = 2;
pub fn new() -> Self {
Self {
pending: TrunkFrame::new(),
pending_bytes: Self::FRAME_HEADER,
max_entries: 10,
max_bytes: 1200,
flush_interval: Duration::from_millis(5),
}
}
/// Push a session packet. Returns `Some(frame)` if the batch is now full
/// and was flushed, `None` if more room remains.
pub fn push(&mut self, session_id: [u8; 2], payload: Bytes) -> Option<TrunkFrame> {
let entry_wire = TrunkEntry::OVERHEAD + payload.len();
// If adding this entry would exceed limits, flush first.
if self.should_flush_with(entry_wire) && !self.pending.is_empty() {
let frame = self.take_pending();
// Then start a new batch with this entry.
self.pending.push(session_id, payload);
self.pending_bytes += entry_wire;
return Some(frame);
}
self.pending.push(session_id, payload);
self.pending_bytes += entry_wire;
if self.should_flush() {
Some(self.take_pending())
} else {
None
}
}
/// Flush the current pending frame if non-empty.
pub fn flush(&mut self) -> Option<TrunkFrame> {
if self.pending.is_empty() {
None
} else {
Some(self.take_pending())
}
}
/// Returns `true` if the pending batch has reached `max_entries` or `max_bytes`.
pub fn should_flush(&self) -> bool {
self.pending.len() >= self.max_entries || self.pending_bytes >= self.max_bytes
}
// --- private helpers ---
/// Would adding `extra_bytes` exceed a threshold?
fn should_flush_with(&self, extra_bytes: usize) -> bool {
self.pending.len() + 1 > self.max_entries
|| self.pending_bytes + extra_bytes > self.max_bytes
}
/// Take the pending frame out, resetting state.
fn take_pending(&mut self) -> TrunkFrame {
let frame = std::mem::replace(&mut self.pending, TrunkFrame::new());
self.pending_bytes = Self::FRAME_HEADER;
frame
}
}
impl Default for TrunkBatcher {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn trunk_batcher_fills_and_flushes() {
let mut batcher = TrunkBatcher::new();
batcher.max_entries = 3;
batcher.max_bytes = 4096; // large enough to not interfere
// First two pushes should not flush.
assert!(batcher.push([0, 1], Bytes::from_static(b"aaa")).is_none());
assert!(batcher.push([0, 2], Bytes::from_static(b"bbb")).is_none());
// Third push should trigger flush (max_entries = 3).
let frame = batcher
.push([0, 3], Bytes::from_static(b"ccc"))
.expect("should flush at max_entries");
assert_eq!(frame.len(), 3);
assert_eq!(frame.packets[0].session_id, [0, 1]);
assert_eq!(frame.packets[2].payload, Bytes::from_static(b"ccc"));
// Batcher is now empty.
assert!(batcher.flush().is_none());
}
#[test]
fn trunk_batcher_respects_max_bytes() {
let mut batcher = TrunkBatcher::new();
batcher.max_entries = 100; // won't be the trigger
// Frame header (2) + one entry overhead (4) + 50 payload = 56
// Two entries: 2 + 2*(4+50) = 110
// Three entries: 2 + 3*54 = 164
batcher.max_bytes = 120; // allow at most 2 entries of 50-byte payload
let big = Bytes::from(vec![0xAA; 50]);
assert!(batcher.push([0, 1], big.clone()).is_none()); // 56 bytes
// Second push: 56 + 54 = 110 < 120, fits
assert!(batcher.push([0, 2], big.clone()).is_none());
// Third push would be 164 > 120, so existing batch flushes first
let frame = batcher
.push([0, 3], big.clone())
.expect("should flush on max_bytes");
assert_eq!(frame.len(), 2);
// The third entry is now pending
let remaining = batcher.flush().unwrap();
assert_eq!(remaining.len(), 1);
assert_eq!(remaining.packets[0].session_id, [0, 3]);
}
}
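Given the framing constants used in the tests above (a 2-byte frame header and a 4-byte per-entry overhead), the number of equal-sized payloads that fit in one trunk datagram follows from simple arithmetic. A minimal sketch of that calculation — `entries_per_datagram` is a hypothetical helper for illustration, not part of the crate:

```rust
/// Estimate how many equal-sized payloads fit in one trunk frame,
/// assuming the 2-byte frame header and 4-byte per-entry overhead
/// from the tests above. (Illustrative helper, not in the crate.)
fn entries_per_datagram(max_bytes: usize, payload_len: usize) -> usize {
    const FRAME_HEADER: usize = 2;
    const ENTRY_OVERHEAD: usize = 4;
    max_bytes.saturating_sub(FRAME_HEADER) / (ENTRY_OVERHEAD + payload_len)
}

fn main() {
    // Default 1200-byte budget with 160-byte payloads: 1198 / 164 = 7
    println!("{}", entries_per_datagram(1200, 160));
    // The max_bytes test scenario: 118 / 54 = 2 entries of 50-byte payload
    println!("{}", entries_per_datagram(120, 50));
}
```

This matches the `trunk_batcher_respects_max_bytes` test, which fits exactly two 50-byte entries under a 120-byte cap.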

View File

@@ -0,0 +1,295 @@
//! WZP-S-5 integration tests: crypto handshake wired into live QUIC path.
//!
//! Verifies that `perform_handshake` (client/caller) and `accept_handshake`
//! (relay/callee) complete successfully over a real in-process QUIC connection
//! and produce usable `CryptoSession` values.
use std::net::{Ipv4Addr, SocketAddr};
use std::sync::Arc;
use wzp_client::perform_handshake;
use wzp_crypto::{KeyExchange, WarzoneKeyExchange};
use wzp_proto::{MediaTransport, SignalMessage};
use wzp_relay::handshake::accept_handshake;
use wzp_transport::{client_config, create_endpoint, server_config, QuinnTransport};
/// Establish a QUIC connection and wrap both sides in `QuinnTransport`.
///
/// Returns (client_transport, server_transport, _endpoints) where the endpoint
/// tuple must be kept alive for the duration of the test to avoid premature
/// connection teardown.
async fn connected_pair() -> (Arc<QuinnTransport>, Arc<QuinnTransport>, (quinn::Endpoint, quinn::Endpoint)) {
let _ = rustls::crypto::ring::default_provider().install_default();
let (sc, _cert_der) = server_config();
let server_addr: SocketAddr = (Ipv4Addr::LOCALHOST, 0).into();
let server_ep = create_endpoint(server_addr, Some(sc)).expect("server endpoint");
let server_listen = server_ep.local_addr().expect("server local addr");
let client_addr: SocketAddr = (Ipv4Addr::LOCALHOST, 0).into();
let client_ep = create_endpoint(client_addr, None).expect("client endpoint");
let server_ep_clone = server_ep.clone();
let accept_fut = tokio::spawn(async move {
let conn = wzp_transport::accept(&server_ep_clone).await.expect("accept");
Arc::new(QuinnTransport::new(conn))
});
let client_conn =
wzp_transport::connect(&client_ep, server_listen, "localhost", client_config())
.await
.expect("connect");
let client_transport = Arc::new(QuinnTransport::new(client_conn));
let server_transport = accept_fut.await.expect("join accept task");
(client_transport, server_transport, (server_ep, client_ep))
}
// -----------------------------------------------------------------------
// Test 1: handshake_succeeds
// -----------------------------------------------------------------------
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn handshake_succeeds() {
let (client_transport, server_transport, _endpoints) = connected_pair().await;
let caller_seed: [u8; 32] = [0xAA; 32];
let callee_seed: [u8; 32] = [0xBB; 32];
// Clone Arc so the server transport stays alive in the main task too.
let server_t = Arc::clone(&server_transport);
let callee_handle = tokio::spawn(async move {
accept_handshake(server_t.as_ref(), &callee_seed).await
});
let caller_session = perform_handshake(client_transport.as_ref(), &caller_seed)
.await
.expect("perform_handshake should succeed");
let (callee_session, chosen_profile) = callee_handle
.await
.expect("join callee task")
.expect("accept_handshake should succeed");
// Both sides should have derived a working CryptoSession.
// Verify by encrypting on one side and decrypting on the other.
let header = b"test-header";
let plaintext = b"hello warzone";
let mut ciphertext = Vec::new();
let mut caller_session = caller_session;
let mut callee_session = callee_session;
caller_session
.encrypt(header, plaintext, &mut ciphertext)
.expect("encrypt");
let mut decrypted = Vec::new();
callee_session
.decrypt(header, &ciphertext, &mut decrypted)
.expect("decrypt");
assert_eq!(&decrypted, plaintext);
assert_eq!(chosen_profile, wzp_proto::QualityProfile::GOOD);
// Keep transports alive until test completes.
drop(server_transport);
drop(client_transport);
}
// -----------------------------------------------------------------------
// Test 2: handshake_verifies_identity
// -----------------------------------------------------------------------
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn handshake_verifies_identity() {
let (client_transport, server_transport, _endpoints) = connected_pair().await;
// Two completely different seeds => different identity keys.
let caller_seed: [u8; 32] = [0x11; 32];
let callee_seed: [u8; 32] = [0x22; 32];
// Confirm the seeds produce different identity public keys.
let caller_kx = WarzoneKeyExchange::from_identity_seed(&caller_seed);
let callee_kx = WarzoneKeyExchange::from_identity_seed(&callee_seed);
assert_ne!(
caller_kx.identity_public_key(),
callee_kx.identity_public_key(),
"different seeds must produce different identity keys"
);
let server_t = Arc::clone(&server_transport);
let callee_handle = tokio::spawn(async move {
accept_handshake(server_t.as_ref(), &callee_seed).await
});
let caller_session = perform_handshake(client_transport.as_ref(), &caller_seed)
.await
.expect("handshake must succeed even with different identities");
let (callee_session, _profile) = callee_handle
.await
.expect("join")
.expect("accept_handshake must succeed");
// Cross-encrypt/decrypt to prove the shared session works.
let header = b"id-test";
let plaintext = b"identity verified";
let mut ct = Vec::new();
let mut caller_session = caller_session;
let mut callee_session = callee_session;
caller_session
.encrypt(header, plaintext, &mut ct)
.expect("encrypt");
let mut pt = Vec::new();
callee_session
.decrypt(header, &ct, &mut pt)
.expect("decrypt");
assert_eq!(&pt, plaintext);
drop(server_transport);
drop(client_transport);
}
// -----------------------------------------------------------------------
// Test 3: auth_then_handshake
// -----------------------------------------------------------------------
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auth_then_handshake() {
let (client_transport, server_transport, _endpoints) = connected_pair().await;
let caller_seed: [u8; 32] = [0xCC; 32];
let callee_seed: [u8; 32] = [0xDD; 32];
// The callee side: first consume the AuthToken, then run accept_handshake.
let server_t = Arc::clone(&server_transport);
let callee_handle = tokio::spawn(async move {
// 1. Receive AuthToken
let auth_msg = server_t
.recv_signal()
.await
.expect("recv_signal should succeed")
.expect("should receive a message");
let token = match auth_msg {
SignalMessage::AuthToken { token } => token,
other => panic!("expected AuthToken, got {:?}", std::mem::discriminant(&other)),
};
// 2. Run the cryptographic handshake
let (session, profile) = accept_handshake(server_t.as_ref(), &callee_seed)
.await
.expect("accept_handshake after auth");
(token, session, profile)
});
// Caller side: send AuthToken first, then perform_handshake.
let auth = SignalMessage::AuthToken {
token: "bearer-test-token-12345".to_string(),
};
client_transport
.send_signal(&auth)
.await
.expect("send AuthToken");
let caller_session = perform_handshake(client_transport.as_ref(), &caller_seed)
.await
.expect("perform_handshake after auth");
let (received_token, callee_session, _profile) = callee_handle
.await
.expect("join callee task");
// Verify the auth token was received correctly.
assert_eq!(received_token, "bearer-test-token-12345");
// Verify the crypto session works after the auth preamble.
let header = b"auth-hdr";
let plaintext = b"post-auth payload";
let mut ct = Vec::new();
let mut caller_session = caller_session;
let mut callee_session = callee_session;
caller_session
.encrypt(header, plaintext, &mut ct)
.expect("encrypt");
let mut pt = Vec::new();
callee_session
.decrypt(header, &ct, &mut pt)
.expect("decrypt");
assert_eq!(&pt, plaintext);
drop(server_transport);
drop(client_transport);
}
// -----------------------------------------------------------------------
// Test 4: handshake_rejects_bad_signature
// -----------------------------------------------------------------------
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn handshake_rejects_bad_signature() {
let (client_transport, server_transport, _endpoints) = connected_pair().await;
let caller_seed: [u8; 32] = [0xEE; 32];
let callee_seed: [u8; 32] = [0xFF; 32];
// Spawn callee -- it should reject the tampered CallOffer.
let server_t = Arc::clone(&server_transport);
let callee_handle = tokio::spawn(async move {
accept_handshake(server_t.as_ref(), &callee_seed).await
});
// Manually build a CallOffer with a corrupted signature.
let mut kx = WarzoneKeyExchange::from_identity_seed(&caller_seed);
let identity_pub = kx.identity_public_key();
let ephemeral_pub = kx.generate_ephemeral();
let mut sign_data = Vec::with_capacity(32 + 10);
sign_data.extend_from_slice(&ephemeral_pub);
sign_data.extend_from_slice(b"call-offer");
let mut signature = kx.sign(&sign_data);
// Tamper: flip bits in the signature.
for byte in signature.iter_mut().take(8) {
*byte ^= 0xFF;
}
let bad_offer = SignalMessage::CallOffer {
identity_pub,
ephemeral_pub,
signature,
supported_profiles: vec![wzp_proto::QualityProfile::GOOD],
};
client_transport
.send_signal(&bad_offer)
.await
.expect("send tampered CallOffer");
// The callee should return an error about signature verification.
let result = callee_handle.await.expect("join callee task");
match result {
Ok(_) => panic!("accept_handshake must reject a bad signature"),
Err(e) => {
let err_msg = e.to_string();
assert!(
err_msg.contains("signature verification failed"),
"error should mention signature verification, got: {err_msg}"
);
}
}
drop(server_transport);
drop(client_transport);
}

View File

@@ -6,6 +6,7 @@
use async_trait::async_trait;
use std::sync::Mutex;
use wzp_proto::packet::TrunkFrame;
use wzp_proto::{MediaPacket, MediaTransport, PathQuality, SignalMessage, TransportError};
use crate::datagram;
@@ -36,6 +37,47 @@ impl QuinnTransport {
pub fn max_datagram_size(&self) -> Option<usize> {
datagram::max_datagram_payload(&self.connection)
}
/// Send an encoded [`TrunkFrame`] as a single QUIC datagram.
pub fn send_trunk(&self, frame: &TrunkFrame) -> Result<(), TransportError> {
let data = frame.encode();
if let Some(max_size) = self.connection.max_datagram_size() {
if data.len() > max_size {
return Err(TransportError::DatagramTooLarge {
size: data.len(),
max: max_size,
});
}
}
self.connection.send_datagram(data).map_err(|e| {
TransportError::Internal(format!("send trunk datagram error: {e}"))
})?;
Ok(())
}
/// Receive a single QUIC datagram and decode it as a [`TrunkFrame`].
///
/// Returns `Ok(None)` on connection close, `Ok(Some(frame))` on success,
/// or an error on malformed data / transport failure.
pub async fn recv_trunk(&self) -> Result<Option<TrunkFrame>, TransportError> {
let data = match self.connection.read_datagram().await {
Ok(data) => data,
Err(quinn::ConnectionError::ApplicationClosed(_)) => return Ok(None),
Err(quinn::ConnectionError::LocallyClosed) => return Ok(None),
Err(e) => {
return Err(TransportError::Internal(format!(
"recv trunk datagram error: {e}"
)))
}
};
TrunkFrame::decode(&data)
.map(Some)
.ok_or_else(|| TransportError::Internal("malformed trunk frame".into()))
}
}
#[async_trait]

View File

@@ -18,6 +18,9 @@ tracing = { workspace = true }
tracing-subscriber = { workspace = true }
bytes = { workspace = true }
anyhow = "1"
wzp-relay = { path = "../wzp-relay" }
serde_json = "1"
rustls-pemfile = "2"
axum = { version = "0.8", features = ["ws"] }
tower-http = { version = "0.6", features = ["fs"] }
futures = "0.3"
@@ -26,6 +29,7 @@ rcgen = "0.13"
rustls = { version = "0.23", default-features = false, features = ["ring", "std"] }
rustls-pki-types = "1"
tokio-rustls = "0.26"
prometheus = "0.13"
[[bin]]
name = "wzp-web"

View File

@@ -7,7 +7,6 @@
//!
//! Rooms: clients connect to /ws/<room-name> and are paired by room.
use std::net::SocketAddr;
use std::sync::Arc;
@@ -25,20 +24,16 @@ use tracing::{error, info, warn};
use wzp_client::call::{CallConfig, CallDecoder, CallEncoder};
use wzp_proto::MediaTransport;
mod metrics;
use metrics::WebMetrics;
const FRAME_SAMPLES: usize = 960;
#[derive(Clone)]
struct AppState {
relay_addr: SocketAddr,
auth_url: Option<String>,
metrics: WebMetrics,
}
#[tokio::main]
@@ -51,6 +46,9 @@ async fn main() -> anyhow::Result<()> {
let mut port: u16 = 8080;
let mut relay_addr: SocketAddr = "127.0.0.1:4433".parse()?;
let mut use_tls = false;
let mut auth_url: Option<String> = None;
let mut cert_path: Option<String> = None;
let mut key_path: Option<String> = None;
let args: Vec<String> = std::env::args().collect();
let mut i = 1;
@@ -59,16 +57,22 @@ async fn main() -> anyhow::Result<()> {
"--port" => { i += 1; port = args[i].parse().expect("invalid port"); } "--port" => { i += 1; port = args[i].parse().expect("invalid port"); }
"--relay" => { i += 1; relay_addr = args[i].parse().expect("invalid relay address"); } "--relay" => { i += 1; relay_addr = args[i].parse().expect("invalid relay address"); }
"--tls" => { use_tls = true; } "--tls" => { use_tls = true; }
"--auth-url" => { i += 1; auth_url = Some(args[i].clone()); }
"--cert" => { i += 1; cert_path = Some(args[i].clone()); }
"--key" => { i += 1; key_path = Some(args[i].clone()); }
"--help" | "-h" => { "--help" | "-h" => {
eprintln!("Usage: wzp-web [--port 8080] [--relay 127.0.0.1:4433] [--tls]"); eprintln!("Usage: wzp-web [--port 8080] [--relay 127.0.0.1:4433] [--tls] [--auth-url <url>]");
eprintln!(); eprintln!();
eprintln!("Options:"); eprintln!("Options:");
eprintln!(" --port <port> HTTP/WebSocket port (default: 8080)"); eprintln!(" --port <port> HTTP/WebSocket port (default: 8080)");
eprintln!(" --relay <addr> WZP relay address (default: 127.0.0.1:4433)"); eprintln!(" --relay <addr> WZP relay address (default: 127.0.0.1:4433)");
eprintln!(" --tls Enable HTTPS (required for mic on Android)"); eprintln!(" --tls Enable HTTPS (required for mic on Android)");
eprintln!(" --auth-url <url> featherChat auth endpoint for token validation");
eprintln!(" --cert <path> TLS certificate PEM file (optional, overrides self-signed)");
eprintln!(" --key <path> TLS private key PEM file (optional, overrides self-signed)");
eprintln!();
eprintln!("Rooms: open https://host:port/<room-name> to join a room.");
eprintln!("Browser sends auth JSON as first WS message when --auth-url is set.");
std::process::exit(0);
}
_ => {}
@@ -76,9 +80,15 @@ async fn main() -> anyhow::Result<()> {
i += 1;
}
if let Some(ref url) = auth_url {
info!(url, "auth enabled — browsers must send token as first WS message");
}
let web_metrics = WebMetrics::new();
let state = AppState {
relay_addr,
auth_url,
metrics: web_metrics,
};
let static_dir = if std::path::Path::new("crates/wzp-web/static").exists() {
@@ -89,20 +99,44 @@ async fn main() -> anyhow::Result<()> {
"static" "static"
}; };
// Serve index.html for any path that isn't /ws/, /metrics, or a static file.
// This lets URLs like /manwe load the SPA which reads the room from the path.
let static_service = ServeDir::new(static_dir)
.fallback(tower_http::services::ServeFile::new(
format!("{}/index.html", static_dir),
));
let app = Router::new()
.route("/ws/{room}", get(ws_handler))
.route("/metrics", get(metrics::metrics_handler))
.fallback_service(static_service)
.with_state(state);
let listen: SocketAddr = format!("0.0.0.0:{port}").parse()?;
if use_tls {
let (cert_der, key_der) = if let (Some(cp), Some(kp)) = (&cert_path, &key_path) {
// Load real certificates from files
info!(cert = %cp, key = %kp, "loading TLS certificates from files");
let cert_pem = std::fs::read(cp)?;
let key_pem = std::fs::read(kp)?;
let cert = rustls_pemfile::certs(&mut &cert_pem[..])
.next()
.ok_or_else(|| anyhow::anyhow!("no certificate found in PEM"))??;
let key = rustls_pemfile::private_key(&mut &key_pem[..])?
.ok_or_else(|| anyhow::anyhow!("no private key found in PEM"))?;
(cert, key)
} else {
// Generate self-signed for development
info!("generating self-signed TLS certificate (use --cert/--key for production)");
let cert_key = rcgen::generate_simple_self_signed(vec![
"localhost".to_string(), "wzp".to_string(),
])?;
let cert = rustls_pki_types::CertificateDer::from(cert_key.cert);
let key = rustls_pki_types::PrivateKeyDer::try_from(cert_key.key_pair.serialize_der())
.map_err(|e| anyhow::anyhow!("key error: {e}"))?;
(cert, key)
};
let mut tls_config = rustls::ServerConfig::builder()
.with_no_client_auth()
@@ -141,6 +175,59 @@ async fn ws_handler(
async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
info!(room = %room, "client joined room");
state.metrics.active_connections.inc();
let (mut ws_sender, mut ws_receiver) = socket.split();
// Auth: if --auth-url is set, expect a JSON auth message from the browser first
let browser_token: Option<String> = if state.auth_url.is_some() {
info!(room = %room, "waiting for auth token from browser...");
match ws_receiver.next().await {
Some(Ok(Message::Text(text))) => {
match serde_json::from_str::<serde_json::Value>(&text) {
Ok(v) if v.get("type").and_then(|t| t.as_str()) == Some("auth") => {
let token = v.get("token").and_then(|t| t.as_str()).unwrap_or("").to_string();
if token.is_empty() {
error!(room = %room, "empty auth token");
state.metrics.auth_failures.inc();
state.metrics.active_connections.dec();
return;
}
// Validate against featherChat
if let Some(ref url) = state.auth_url {
match wzp_relay::auth::validate_token(url, &token).await {
Ok(client) => {
info!(room = %room, fingerprint = %client.fingerprint, "browser authenticated");
}
Err(e) => {
error!(room = %room, "browser auth failed: {e}");
state.metrics.auth_failures.inc();
state.metrics.active_connections.dec();
return;
}
}
}
Some(token)
}
_ => {
error!(room = %room, "expected auth JSON, got: {text}");
state.metrics.auth_failures.inc();
state.metrics.active_connections.dec();
return;
}
}
}
_ => {
error!(room = %room, "no auth message from browser");
state.metrics.auth_failures.inc();
state.metrics.active_connections.dec();
return;
}
}
} else {
None
};
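For reference, the first WebSocket message the handler above expects when `--auth-url` is set is a JSON object with a `type` and `token` field, matching the parsing code (`v.get("type")`, `v.get("token")`). The token value below is illustrative:

```json
{ "type": "auth", "token": "<featherChat bearer token>" }
```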
// Connect to relay
let relay_addr = state.relay_addr;
let bind_addr: SocketAddr = if relay_addr.is_ipv6() {
@@ -155,10 +242,14 @@ async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
Err(e) => { error!("create endpoint: {e}"); return; } Err(e) => { error!("create endpoint: {e}"); return; }
}; };
// Hash room name for SNI privacy
let sni = if room.is_empty() {
"default".to_string()
} else {
wzp_crypto::hash_room_name(&room)
};
let connection =
match wzp_transport::connect(&endpoint, relay_addr, &sni, client_config).await {
Ok(c) => c,
Err(e) => { error!("connect to relay: {e}"); return; }
};
@@ -166,9 +257,44 @@ async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
info!(room = %room, "connected to relay");
let transport = Arc::new(wzp_transport::QuinnTransport::new(connection));
// Send auth token to relay (if auth is enabled)
if let Some(ref token) = browser_token {
let auth = wzp_proto::SignalMessage::AuthToken {
token: token.clone(),
};
if let Err(e) = transport.send_signal(&auth).await {
error!(room = %room, "send auth to relay: {e}");
return;
}
}
// Crypto handshake with relay
let handshake_start = std::time::Instant::now();
let bridge_seed = wzp_crypto::Seed::generate();
match wzp_client::handshake::perform_handshake(&*transport, &bridge_seed.0).await {
Ok(_session) => {
let elapsed = handshake_start.elapsed().as_secs_f64();
state.metrics.handshake_latency.observe(elapsed);
info!(room = %room, elapsed_ms = %(elapsed * 1000.0), "crypto handshake with relay complete");
}
Err(e) => {
error!(room = %room, "relay handshake failed: {e}");
transport.close().await.ok();
state.metrics.active_connections.dec();
return;
}
}
// Web bridge config: low latency for PTT, disable silence suppression
// (PTT handles silence at the browser level, no need to suppress here)
let config = CallConfig {
suppression_enabled: false,
jitter_target: 3, // 60ms instead of default (~1s)
jitter_max: 20, // 400ms cap
jitter_min: 1, // start playing after 20ms
..CallConfig::default()
};
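The depth comments above assume 20 ms frames (`FRAME_SAMPLES = 960` at 48 kHz), so each unit of jitter depth corresponds to 20 ms of buffered audio. A trivial sketch of that conversion — `jitter_depth_ms` is a hypothetical helper for illustration only:

```rust
/// Convert a jitter-buffer depth in 20 ms frames to milliseconds.
/// (Illustrative helper; 960 samples at 48 kHz = 20 ms per frame.)
fn jitter_depth_ms(frames: u32) -> u32 {
    const FRAME_MS: u32 = 20;
    frames * FRAME_MS
}

fn main() {
    // The bridge settings above: target 3, max 20, min 1.
    println!("target={}ms max={}ms min={}ms",
        jitter_depth_ms(3), jitter_depth_ms(20), jitter_depth_ms(1));
}
```

This is where the 60 ms / 400 ms / 20 ms figures in the config comments come from.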
let encoder = Arc::new(Mutex::new(CallEncoder::new(&config)));
let decoder = Arc::new(Mutex::new(CallDecoder::new(&config)));
@@ -176,6 +302,7 @@ async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
let send_transport = transport.clone();
let send_encoder = encoder.clone();
let send_room = room.clone();
let send_metrics = state.metrics.clone();
let send_task = tokio::spawn(async move {
let mut frames_sent = 0u64;
while let Some(Ok(msg)) = ws_receiver.next().await {
@@ -201,6 +328,7 @@ async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
return;
}
}
send_metrics.frames_bridged.with_label_values(&["up"]).inc();
frames_sent += 1;
if frames_sent % 500 == 0 {
info!(room = %send_room, frames_sent, "browser → relay");
@@ -217,6 +345,7 @@ async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
let recv_transport = transport.clone();
let recv_decoder = decoder.clone();
let recv_room = room.clone();
let recv_metrics = state.metrics.clone();
let recv_task = tokio::spawn(async move {
let mut pcm_buf = vec![0i16; FRAME_SAMPLES];
let mut frames_recv = 0u64;
@@ -235,6 +364,7 @@ async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
error!("ws send: {e}"); error!("ws send: {e}");
return; return;
} }
recv_metrics.frames_bridged.with_label_values(&["down"]).inc();
frames_recv += 1;
if frames_recv % 500 == 0 {
info!(room = %recv_room, frames_recv, "relay → browser");
@@ -255,5 +385,6 @@ async fn handle_ws(socket: WebSocket, room: String, state: AppState) {
}
transport.close().await.ok();
state.metrics.active_connections.dec();
info!(room = %room, "session ended");
}

View File

@@ -0,0 +1,130 @@
//! Prometheus metrics for the WZP web bridge.
use prometheus::{
Encoder, Histogram, HistogramOpts, IntCounter, IntCounterVec, IntGauge, Opts, Registry,
TextEncoder,
};
/// Holds all Prometheus metrics for the web bridge.
#[derive(Clone)]
pub struct WebMetrics {
pub active_connections: IntGauge,
pub frames_bridged: IntCounterVec,
pub auth_failures: IntCounter,
pub handshake_latency: Histogram,
registry: Registry,
}
impl WebMetrics {
/// Create and register all web bridge metrics.
pub fn new() -> Self {
let registry = Registry::new();
let active_connections = IntGauge::with_opts(
Opts::new("wzp_web_active_connections", "Current WebSocket connections"),
)
.expect("metric");
registry
.register(Box::new(active_connections.clone()))
.expect("register");
let frames_bridged = IntCounterVec::new(
Opts::new("wzp_web_frames_bridged_total", "Audio frames bridged"),
&["direction"],
)
.expect("metric");
registry
.register(Box::new(frames_bridged.clone()))
.expect("register");
let auth_failures = IntCounter::with_opts(
Opts::new("wzp_web_auth_failures_total", "Browser auth failures"),
)
.expect("metric");
registry
.register(Box::new(auth_failures.clone()))
.expect("register");
let handshake_latency = Histogram::with_opts(
HistogramOpts::new(
"wzp_web_handshake_latency_seconds",
"Relay handshake time",
)
.buckets(vec![0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0]),
)
.expect("metric");
registry
.register(Box::new(handshake_latency.clone()))
.expect("register");
Self {
active_connections,
frames_bridged,
auth_failures,
handshake_latency,
registry,
}
}
/// Encode all metrics as Prometheus text exposition format.
pub fn gather(&self) -> String {
let encoder = TextEncoder::new();
let metric_families = self.registry.gather();
let mut buf = Vec::new();
encoder.encode(&metric_families, &mut buf).unwrap();
String::from_utf8(buf).unwrap()
}
}
/// Axum handler that returns Prometheus text metrics.
pub async fn metrics_handler(
axum::extract::State(state): axum::extract::State<super::AppState>,
) -> String {
state.metrics.gather()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn web_metrics_register() {
let m = WebMetrics::new();
// Touch CounterVec labels so they appear in output
m.frames_bridged.with_label_values(&["up"]);
m.frames_bridged.with_label_values(&["down"]);
let output = m.gather();
assert!(
output.contains("wzp_web_active_connections"),
"missing active_connections"
);
assert!(
output.contains("wzp_web_frames_bridged_total"),
"missing frames_bridged"
);
assert!(
output.contains("wzp_web_auth_failures_total"),
"missing auth_failures"
);
assert!(
output.contains("wzp_web_handshake_latency_seconds"),
"missing handshake_latency"
);
}
#[test]
fn web_metrics_track_connections() {
let m = WebMetrics::new();
assert_eq!(m.active_connections.get(), 0);
m.active_connections.inc();
m.active_connections.inc();
assert_eq!(m.active_connections.get(), 2);
m.active_connections.dec();
assert_eq!(m.active_connections.get(), 1);
let output = m.gather();
assert!(output.contains("wzp_web_active_connections 1"));
}
}

View File

@@ -1,34 +1,51 @@
// WarzonePhone AudioWorklet processors.
// Both capture and playback handle 960-sample frames (20ms @ 48kHz).
// AudioWorklet calls process() with 128-sample blocks, so we buffer internally.
const FRAME_SIZE = 960;
class WZPCaptureProcessor extends AudioWorkletProcessor {
constructor() {
super();
// Pre-allocate ring buffer large enough for several frames
this._ring = new Float32Array(FRAME_SIZE * 4);
this._writePos = 0;
}
process(inputs, _outputs, _parameters) {
const input = inputs[0];
if (!input || !input[0]) return true;
const samples = input[0]; // Float32Array, typically 128 samples
const len = samples.length;
// Write into ring buffer
if (this._writePos + len > this._ring.length) {
// Should not happen with FRAME_SIZE * 4 capacity and timely draining,
// but handle gracefully by resizing
const bigger = new Float32Array(this._ring.length * 2);
bigger.set(this._ring.subarray(0, this._writePos));
this._ring = bigger;
}
this._ring.set(samples, this._writePos);
this._writePos += len;
// Drain complete 960-sample frames
while (this._writePos >= FRAME_SIZE) {
// Convert Float32 -> Int16 PCM
const pcm = new Int16Array(FRAME_SIZE);
for (let i = 0; i < FRAME_SIZE; i++) {
const s = this._ring[i];
pcm[i] = s < -1 ? -32768 : s > 1 ? 32767 : (s * 32767) | 0;
}
// Shift remaining data forward
this._writePos -= FRAME_SIZE;
if (this._writePos > 0) {
this._ring.copyWithin(0, FRAME_SIZE, FRAME_SIZE + this._writePos);
}
// Send the Int16 PCM buffer (1920 bytes) to the main thread
this.port.postMessage(pcm.buffer, [pcm.buffer]);
}
@@ -36,4 +53,90 @@ class CaptureProcessor extends AudioWorkletProcessor {
    return true;
  }
}

class WZPPlaybackProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // Ring buffer for decoded Float32 samples ready for output
    this._ring = new Float32Array(FRAME_SIZE * 8);
    this._readPos = 0;
    this._writePos = 0;
    this._maxBuffered = FRAME_SIZE * 6; // ~120ms max to prevent drift

    this.port.onmessage = (e) => {
      // Receive Int16 PCM from main thread, convert to Float32
      const pcm = new Int16Array(e.data);
      const len = pcm.length;

      // Check capacity
      let available = this._writePos - this._readPos;
      if (available < 0) available += this._ring.length;
      if (available + len > this._maxBuffered) {
        // Too much buffered; drop oldest samples to prevent drift
        this._readPos = this._writePos;
      }

      // Ensure ring buffer is big enough
      if (this._ring.length < len + available + 128) {
        const bigger = new Float32Array(this._ring.length * 2);
        // Copy existing data contiguously
        if (this._readPos <= this._writePos) {
          bigger.set(this._ring.subarray(this._readPos, this._writePos));
        } else {
          const firstPart = this._ring.subarray(this._readPos);
          const secondPart = this._ring.subarray(0, this._writePos);
          bigger.set(firstPart);
          bigger.set(secondPart, firstPart.length);
        }
        this._ring = bigger;
        const count = available;
        this._readPos = 0;
        this._writePos = count;
      }

      // Write converted samples into ring buffer
      for (let i = 0; i < len; i++) {
        this._ring[this._writePos] = pcm[i] / 32768.0;
        this._writePos++;
        if (this._writePos >= this._ring.length) this._writePos = 0;
      }
    };
  }

  process(_inputs, outputs, _parameters) {
    const output = outputs[0];
    if (!output || !output[0]) return true;
    const out = output[0]; // 128 samples typically
    const needed = out.length;

    let available;
    if (this._writePos >= this._readPos) {
      available = this._writePos - this._readPos;
    } else {
      available = this._ring.length - this._readPos + this._writePos;
    }

    if (available >= needed) {
      for (let i = 0; i < needed; i++) {
        out[i] = this._ring[this._readPos];
        this._readPos++;
        if (this._readPos >= this._ring.length) this._readPos = 0;
      }
    } else {
      // Output what we have, zero-fill the rest (underrun)
      for (let i = 0; i < available; i++) {
        out[i] = this._ring[this._readPos];
        this._readPos++;
        if (this._readPos >= this._ring.length) this._readPos = 0;
      }
      for (let i = available; i < needed; i++) {
        out[i] = 0;
      }
    }
    return true;
  }
}

registerProcessor('wzp-capture-processor', WZPCaptureProcessor);
registerProcessor('wzp-playback-processor', WZPPlaybackProcessor);
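The Float32 to Int16 clamping used in the capture processor, and the inverse division used on the playback side, round-trip with sub-LSB error. A standalone sketch of that conversion pair (the helper names are illustrative, not part of the worklet code):

```javascript
// Mirror of the capture processor's clamping conversion: Float32 in [-1, 1] -> Int16.
function floatToInt16(s) {
  return s < -1 ? -32768 : s > 1 ? 32767 : (s * 32767) | 0;
}

// Mirror of the playback path's conversion back: Int16 -> Float32.
function int16ToFloat(v) {
  return v / 32768.0;
}

// Out-of-range input is clamped; in-range input round-trips with tiny error.
console.log(floatToInt16(2.0));   // 32767 (clamped)
console.log(floatToInt16(-2.0));  // -32768 (clamped)
console.log(Math.abs(int16ToFloat(floatToInt16(0.5)) - 0.5) < 0.001); // true
```

Truncating with `| 0` instead of `Math.round` matches the worklet above; the difference is at most one LSB, inaudible at 16-bit depth.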


@@ -66,7 +66,6 @@ let statsInterval = null;
// Use room from URL path or input field
function getRoom() {
  const path = location.pathname.replace(/^\//, '').replace(/\/$/, '');
  if (path && path !== 'index.html') return path;
  const hash = location.hash.replace('#', '');
@@ -74,6 +73,14 @@ function getRoom() {
  return document.getElementById('room').value.trim() || 'default';
}

// Pre-fill room input from URL on page load
(function() {
  const path = location.pathname.replace(/^\//, '').replace(/\/$/, '');
  if (path && path !== 'index.html') {
    document.getElementById('room').value = path;
  }
})();

function setStatus(msg) { document.getElementById('status').textContent = msg; }
function setStats(msg) { document.getElementById('stats').textContent = msg; }
@@ -165,16 +172,34 @@ function stopCall() {
function cleanupAudio() {
  if (captureNode) { captureNode.disconnect(); captureNode = null; }
  if (playbackNode) { playbackNode.disconnect(); playbackNode = null; }
  if (audioCtx) { audioCtx.close(); audioCtx = null; workletLoaded = false; }
  if (mediaStream) { mediaStream.getTracks().forEach(t => t.stop()); mediaStream = null; }
}

let workletLoaded = false;

async function loadWorkletModule() {
  if (workletLoaded) return true;
  if (typeof AudioWorkletNode === 'undefined' || !audioCtx.audioWorklet) {
    console.warn('AudioWorklet API not supported in this browser — using ScriptProcessorNode fallback');
    return false;
  }
  try {
    await audioCtx.audioWorklet.addModule('audio-processor.js');
    workletLoaded = true;
    return true;
  } catch(e) {
    console.warn('AudioWorklet module failed to load — using ScriptProcessorNode fallback:', e);
    return false;
  }
}

async function startAudioCapture() {
  const source = audioCtx.createMediaStreamSource(mediaStream);
  const hasWorklet = await loadWorkletModule();
  if (hasWorklet) {
    captureNode = new AudioWorkletNode(audioCtx, 'wzp-capture-processor');
    captureNode.port.onmessage = (e) => {
      if (!active || !ws || ws.readyState !== WebSocket.OPEN || !transmitting) return;
      ws.send(e.data);
@@ -188,10 +213,10 @@ async function startAudioCapture() {
    };
    source.connect(captureNode);
    captureNode.connect(audioCtx.destination); // needed to keep worklet alive
  } else {
    // Fallback to ScriptProcessorNode (deprecated but widely supported)
    console.warn('Capture: using ScriptProcessorNode fallback');
    captureNode = audioCtx.createScriptProcessor(4096, 1, 1);
    let acc = new Float32Array(0);
    captureNode.onaudioprocess = (ev) => {
      if (!active || !ws || ws.readyState !== WebSocket.OPEN || !transmitting) return;
@@ -215,13 +240,14 @@ async function startAudioCapture() {
}

async function startAudioPlayback() {
  const hasWorklet = await loadWorkletModule();
  if (hasWorklet) {
    playbackNode = new AudioWorkletNode(audioCtx, 'wzp-playback-processor');
    playbackNode.connect(audioCtx.destination);
  } else {
    console.warn('Playback: using scheduled BufferSource fallback');
    playbackNode = null; // will use createBufferSource fallback in playAudio()
  }
}
@@ -230,16 +256,15 @@ let nextPlayTime = 0;
function playAudio(pcmInt16) {
  if (!audioCtx) return;
  if (playbackNode && playbackNode.port) {
    // AudioWorklet path — send Int16 PCM directly to the worklet for conversion
    playbackNode.port.postMessage(pcmInt16.buffer, [pcmInt16.buffer]);
  } else {
    // Fallback: scheduled BufferSource (convert Int16 -> Float32 on main thread)
    const floatData = new Float32Array(pcmInt16.length);
    for (let i = 0; i < pcmInt16.length; i++) {
      floatData[i] = pcmInt16[i] / 32768.0;
    }
    const buffer = audioCtx.createBuffer(1, floatData.length, SAMPLE_RATE);
    buffer.getChannelData(0).set(floatData);
    const source = audioCtx.createBufferSource();


@@ -1,45 +0,0 @@
// AudioWorklet processor for playing received audio.
// Receives PCM samples from the main thread and outputs them.
class PlaybackProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.buffer = new Float32Array(0);
    this.maxBuffered = 48000 / 5; // 200ms max
    this.port.onmessage = (e) => {
      const incoming = new Float32Array(e.data);
      // Append
      const newBuf = new Float32Array(this.buffer.length + incoming.length);
      newBuf.set(this.buffer);
      newBuf.set(incoming, this.buffer.length);
      this.buffer = newBuf;
      // Cap buffer to prevent drift
      if (this.buffer.length > this.maxBuffered) {
        this.buffer = this.buffer.slice(this.buffer.length - this.maxBuffered);
      }
    };
  }
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    if (!output || !output[0]) return true;
    const out = output[0]; // 128 samples typically
    if (this.buffer.length >= out.length) {
      out.set(this.buffer.subarray(0, out.length));
      this.buffer = this.buffer.slice(out.length);
    } else if (this.buffer.length > 0) {
      out.set(this.buffer);
      for (let i = this.buffer.length; i < out.length; i++) out[i] = 0;
      this.buffer = new Float32Array(0);
    } else {
      for (let i = 0; i < out.length; i++) out[i] = 0;
    }
    return true;
  }
}
registerProcessor('playback-processor', PlaybackProcessor);

deps/featherchat vendored Submodule

Submodule deps/featherchat added at 4a4fa9fab4

docs/INTEGRATION_TASKS.md Normal file

@@ -0,0 +1,91 @@
# WZP Integration Tasks
Based on featherChat commit 65f6390's FUTURE_TASKS.md, extended with WZP integration items.
## Status Key
- DONE = implemented and tested
- PARTIAL = code exists but not wired into live path
- TODO = not started
---
## WZP-Side Tasks (our responsibility)
### WZP-S-1. HKDF Salt/Info String Alignment — DONE
- Both use `None` salt, info strings `warzone-ed25519` / `warzone-x25519`
- 15 cross-project tests verify identical output
### WZP-S-2. Accept featherChat Bearer Token on Relay — DONE
- `--auth-url` flag on relay
- Clients send `SignalMessage::AuthToken` as first signal
- Relay calls `POST {auth_url}` to validate, rejects if invalid
- Commit: `ad16ddb`
### WZP-S-3. Signaling Bridge Mode — DONE
- `featherchat.rs` module: encode/decode WZP SignalMessage into FC CallSignal.payload
- `WzpCallPayload` wraps signal + relay_addr + room
- Commit: `ad16ddb`
### WZP-S-4. Room Access Control — DONE
- `hash_room_name()` in wzp-crypto: SHA-256("featherchat-group:" + name)[:16] → 32 hex chars
- CLI `--room <name>` hashes before using as SNI
- Web bridge hashes room name before connecting to relay
- RoomManager gains ACL: `with_acl()`, `allow()`, `is_authorized()`
- `join()` now returns `Result<ParticipantId, String>`, rejects unauthorized
- Relay passes authenticated fingerprint to room join
### WZP-S-5. Wire Crypto Handshake into Live Path — DONE
- CLI: `perform_handshake()` called after connect, before any media mode
- Relay: `accept_handshake()` called after auth, before room join
- Web bridge: `perform_handshake()` called after auth token, before audio loops
- Relay generates ephemeral identity seed at startup, logs fingerprint
- Quality profile negotiated during handshake
### WZP-S-6. Web Bridge + featherChat Web Client — DONE
- `--auth-url` flag on web bridge
- Browser sends `{ "type": "auth", "token": "..." }` as first WS message
- Web bridge validates token against featherChat, then passes to relay
- `--cert`/`--key` flags for production TLS certificates
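The auth-first ordering matters: the token message must be the very first frame on the socket, before any audio. A sketch of the browser side (the WebSocket URL is an illustrative assumption; only the `type`/`token` fields are from the source):

```javascript
// Build the auth message the web bridge expects as the first WS frame.
function buildAuthMessage(token) {
  return JSON.stringify({ type: 'auth', token });
}

// Usage in the browser (sketch):
//   const ws = new WebSocket('wss://bridge.example:8080/ws'); // URL is hypothetical
//   ws.onopen = () => ws.send(buildAuthMessage(myToken));     // auth before audio

const msg = buildAuthMessage('abc123');
console.log(JSON.parse(msg).type); // 'auth'
```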
### WZP-S-7. Publish wzp-proto for featherChat — DONE
- `wzp-proto/Cargo.toml` now standalone (no workspace inheritance)
- featherChat can use: `wzp-proto = { git = "ssh://...", path = "crates/wzp-proto" }`
### WZP-S-8. CLI Seed Input — DONE
- `--seed <hex>` and `--mnemonic <24 words>` flags
- featherChat-compatible identity: same seed → same keys
- Commit: `12cdfe6`
### WZP-S-9. Fix Hardcoded Assumptions — DONE
1. No auth on relay — ✅ fixed via S-2 (`--auth-url`)
2. Room names from SNI — ✅ fixed via S-4 (hashed room names)
3. No signaling before media — ✅ fixed via S-5 (mandatory handshake)
4. Self-signed TLS — ✅ fixed via S-6 (`--cert`/`--key` for production)
5. No codec negotiation in web bridge — ✅ profile negotiated in handshake
6. No connection to FC key registry — ✅ fixed via S-2 (token validation)
---
## featherChat-Side Tasks (their responsibility, we support)
### WZP-FC-1. Add CallSignal WireMessage variant — DONE (v0.0.21, 064a730)
### WZP-FC-2. Call state management + sled tree — TODO (1-2d)
### WZP-FC-3. WS handler for call signaling — TODO (0.5d)
### WZP-FC-4. Auth token validation endpoint — DONE (v0.0.21, 064a730)
### WZP-FC-5. Group-to-room mapping — TODO (1d)
### WZP-FC-6. Presence/online status API — TODO (0.5-2d)
### WZP-FC-7. Missed call notifications — TODO (0.5d)
### WZP-FC-8. Cross-project identity verification — DONE (15 tests, 26dc848)
### WZP-FC-9. HKDF salt investigation — DONE (no mismatch)
### WZP-FC-10. Web bridge shared auth — TODO (1-2d)
### FC-CRATE-1. Standalone warzone-protocol — DONE (v0.0.21, 4a4fa9f)
---
## All WZP-S Tasks Complete
The WZP side of integration is finished. featherChat needs:
1. **FC-2 + FC-3** — call state management + WS routing (makes real calls possible)
2. **FC-5** — group-to-room mapping (uses `hash_room_name` convention)
3. **FC-6/7** — presence + missed calls (UX polish)
4. **FC-10** — web bridge shared auth (browser token flow)

docs/TELEMETRY.md Normal file

@@ -0,0 +1,158 @@
# WZP Telemetry & Observability
## Overview
WarzonePhone exports Prometheus-compatible metrics from all services (relay, web bridge, client) for Grafana dashboards. Inter-relay health probes provide always-on monitoring with negligible bandwidth overhead via multiplexed test lines.
## Architecture
```
┌──────────┐    probe (1 pkt/s)    ┌──────────┐
│ Relay A  │◄─────────────────────►│ Relay B  │
│  :4433   │                       │  :4433   │
│ /metrics │                       │ /metrics │
└────┬─────┘                       └────┬─────┘
     │                                  │
     │ scrape                           │ scrape
     ▼                                  ▼
┌─────────────────────────────────────────────┐
│                 Prometheus                  │
└─────────────────┬───────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────┐
│                   Grafana                   │
│ ┌─────────┐ ┌──────────┐ ┌──────────────┐   │
│ │ Relay   │ │ Per-call │ │ Inter-relay  │   │
│ │ Health  │ │ Quality  │ │ Latency Map  │   │
│ └─────────┘ └──────────┘ └──────────────┘   │
└─────────────────────────────────────────────┘
```
## Metrics Exported
### Relay (`/metrics` on HTTP port, default :9090)
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `wzp_relay_active_sessions` | Gauge | — | Current active sessions |
| `wzp_relay_active_rooms` | Gauge | — | Current active rooms |
| `wzp_relay_packets_forwarded_total` | Counter | `room` | Total packets forwarded |
| `wzp_relay_bytes_forwarded_total` | Counter | `room` | Total bytes forwarded |
| `wzp_relay_auth_attempts_total` | Counter | `result` (ok/fail) | Auth validation attempts |
| `wzp_relay_handshake_duration_seconds` | Histogram | — | Crypto handshake time |
| `wzp_relay_session_jitter_buffer_depth` | Gauge | `session_id` | Buffer depth per session |
| `wzp_relay_session_loss_pct` | Gauge | `session_id` | Packet loss percentage |
| `wzp_relay_session_rtt_ms` | Gauge | `session_id` | Round-trip time |
| `wzp_relay_session_underruns_total` | Counter | `session_id` | Jitter buffer underruns |
| `wzp_relay_session_overruns_total` | Counter | `session_id` | Jitter buffer overruns |
### Web Bridge (`/metrics` on same HTTP port)
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `wzp_web_active_connections` | Gauge | — | Current WebSocket connections |
| `wzp_web_frames_bridged_total` | Counter | `direction` (up/down) | Audio frames bridged |
| `wzp_web_auth_failures_total` | Counter | — | Browser auth failures |
| `wzp_web_handshake_latency_seconds` | Histogram | — | Relay handshake time |
### Inter-Relay Probes
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `wzp_probe_rtt_ms` | Gauge | `target` | RTT to peer relay |
| `wzp_probe_loss_pct` | Gauge | `target` | Loss to peer relay |
| `wzp_probe_jitter_ms` | Gauge | `target` | Jitter to peer relay |
| `wzp_probe_up` | Gauge | `target` | 1 if reachable, 0 if not |
### Client (JSONL file)
When `--metrics-file <path>` is used, the client writes one JSON object per second:
```json
{
  "ts": "2026-03-28T06:30:00Z",
  "buffer_depth": 45,
  "underruns": 0,
  "overruns": 0,
  "loss_pct": 1.2,
  "rtt_ms": 34,
  "jitter_ms": 8,
  "frames_sent": 50,
  "frames_received": 49,
  "quality_profile": "GOOD"
}
```
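Since each line of the metrics file is an independent JSON object, post-call analysis is a line-by-line parse. A sketch (field names match the example above; the choice of aggregates is ours):

```javascript
// Summarize a JSONL metrics capture: mean loss and worst RTT.
// Each line is one per-second sample as in the example above.
function summarize(jsonl) {
  const samples = jsonl.trim().split('\n').map(JSON.parse);
  const meanLoss = samples.reduce((a, s) => a + s.loss_pct, 0) / samples.length;
  const maxRtt = Math.max(...samples.map(s => s.rtt_ms));
  return { meanLoss, maxRtt };
}

const capture = [
  '{"ts":"2026-03-28T06:30:00Z","loss_pct":1.2,"rtt_ms":34}',
  '{"ts":"2026-03-28T06:30:01Z","loss_pct":0.8,"rtt_ms":41}',
].join('\n');
console.log(summarize(capture)); // meanLoss ≈ 1.0, maxRtt = 41
```

In a real session the input would come from reading `/tmp/call-metrics.jsonl`; the per-line format means a partial file (call still running, or truncated by a crash) still parses up to the last complete line.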
## Task Breakdown
### WZP-P2-T5: Telemetry & Observability
| ID | Task | Dependencies | Effort |
|----|------|-------------|--------|
| **S1** | Prometheus `/metrics` on relay | None | 2-3h |
| **S2** | Per-session metrics (jitter, loss, RTT) | S1 | 2-3h |
| **S3** | Prometheus `/metrics` on web bridge | None | 2h |
| **S4** | Client `--metrics-file` JSONL export | None | 2h |
| **S5** | Inter-relay health probe (`--probe`) | S1 | 4-6h |
| **S6** | Probe mesh mode (all relays probe each other) | S5 | 2-3h |
| **S7** | Grafana dashboard JSON | S1-S6 | 2h |
### Parallelization
- **Group A** (parallel): S1, S3, S4 — three different binaries, no file overlap
- **Group B** (sequential): S2 after S1, then S5 → S6
- **Last**: S7 after all metrics are defined
## Inter-Relay Health Probes
The probe is a multiplexed test line: one QUIC connection per peer relay, one silent media packet per second (~50 bytes/s). This provides:
- **Continuous RTT measurement**: Ping/Pong signals timed to <1ms precision
- **Loss detection**: Sequence gaps tracked over sliding 60s window
- **Jitter monitoring**: Variation in inter-packet arrival times
- **Outage detection**: `wzp_probe_up` drops to 0 within seconds
### Why multiplexed?
WZP already multiplexes media on a single QUIC connection. The probe session shares the same connection pool — no extra ports, no extra TLS handshakes. At 1 pkt/s of silence (~50 bytes after Opus encoding + headers), the overhead is negligible even on metered links.
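The bandwidth claim is easy to check: at one ~50-byte packet per second per peer, a relay pays about 4.3 MB per peer per day:

```javascript
// Probe overhead: ~50 bytes/s per peer (silent Opus frame + headers, per the text).
const BYTES_PER_SEC = 50;
const SECONDS_PER_DAY = 86400;

const perPeerPerDay = BYTES_PER_SEC * SECONDS_PER_DAY;
console.log(perPeerPerDay);                    // 4320000 bytes
console.log((perPeerPerDay / 1e6).toFixed(2)); // '4.32' MB per peer per day
```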
### Probe mesh example
With 3 relays (A, B, C), each probes the other 2:
```
A → B: rtt=12ms loss=0.0% jitter=2ms
A → C: rtt=45ms loss=0.1% jitter=5ms
B → A: rtt=13ms loss=0.0% jitter=2ms
B → C: rtt=38ms loss=0.0% jitter=4ms
C → A: rtt=44ms loss=0.2% jitter=6ms
C → B: rtt=37ms loss=0.0% jitter=3ms
```
This matrix feeds the Grafana latency heatmap and triggers alerts on degradation.
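Alerting on the matrix amounts to thresholding each directed pair. A sketch in JavaScript (the threshold values are illustrative, not from the source; real alerting would live in Prometheus rules):

```javascript
// Flag probe readings that exceed quality thresholds.
// Thresholds are illustrative defaults, not project-defined values.
function degradedPairs(readings, { maxRttMs = 40, maxLossPct = 0.1 } = {}) {
  return readings
    .filter(r => r.rtt_ms > maxRttMs || r.loss_pct > maxLossPct)
    .map(r => `${r.from}->${r.to}`);
}

const mesh = [
  { from: 'A', to: 'B', rtt_ms: 12, loss_pct: 0.0 },
  { from: 'A', to: 'C', rtt_ms: 45, loss_pct: 0.1 },
  { from: 'C', to: 'A', rtt_ms: 44, loss_pct: 0.2 },
];
console.log(degradedPairs(mesh)); // [ 'A->C', 'C->A' ]
```

Because probes are directed, asymmetric degradation (A→C bad while C→A fine) shows up as a single flagged pair, which is useful for diagnosing one-way routing problems.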
## Usage
```bash
# Relay with metrics
wzp-relay --listen 0.0.0.0:4433 --metrics-port 9090
# Relay with metrics + probe peer
wzp-relay --listen 0.0.0.0:4433 --metrics-port 9090 --probe relay-b:4433
# Web bridge with metrics
wzp-web --port 8080 --relay 127.0.0.1:4433 --metrics-port 9091
# Client with JSONL telemetry
wzp-client --live --metrics-file /tmp/call-metrics.jsonl relay:4433
```
## Grafana Dashboard
The pre-built dashboard (`docs/grafana-dashboard.json`) includes:
1. **Relay Health** — active sessions, rooms, packets/s, bytes/s
2. **Call Quality** — per-session jitter depth, loss%, RTT, underruns over time
3. **Inter-Relay Mesh** — latency heatmap, probe status, loss trends
4. **Web Bridge** — active connections, frames bridged, auth failures


@@ -0,0 +1,230 @@
# Shared Crate Strategy: WZP ↔ featherChat
**Goal:** Both projects import each other's crates directly instead of duplicating code. A change to identity derivation in featherChat automatically applies in WZP, and vice versa for call signaling types.
---
## Current Problem
- `warzone-protocol` uses workspace dependency inheritance (`Cargo.toml` has `ed25519-dalek.workspace = true`). When WZP tries to use it as a path dep, Cargo fails because it can't resolve workspace references from outside the featherChat workspace.
- WZP had to mirror featherChat's `identity.rs`, `mnemonic.rs`, and `Fingerprint` type in `wzp-crypto/src/identity.rs` — duplicate code that can drift.
- featherChat will need `wzp_proto::SignalMessage` for the `WireMessage::CallSignal` variant — another potential duplication.
## Solution: Make Key Crates Standalone-Importable
### What featherChat Needs to Do
#### FC-CRATE-1: Make `warzone-protocol` standalone-publishable
**File:** `warzone/crates/warzone-protocol/Cargo.toml`
Replace all `workspace = true` references with explicit versions:
```toml
# Before:
ed25519-dalek.workspace = true
x25519-dalek.workspace = true
# After:
ed25519-dalek = { version = "2", features = ["serde", "rand_core"] }
x25519-dalek = { version = "2", features = ["serde", "static_secrets"] }
chacha20poly1305 = "0.10"
hkdf = "0.12"
sha2 = "0.10"
rand = "0.8"
bip39 = "2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
bincode = "1"
thiserror = "2"
hex = "0.4"
base64 = "0.22"
uuid = { version = "1", features = ["v4"] }
zeroize = { version = "1", features = ["derive"] }
chrono = { version = "0.4", features = ["serde"] }
k256 = { version = "0.13", features = ["ecdsa", "serde"] }
tiny-keccak = { version = "2", features = ["keccak"] }
```
**Keep workspace inheritance working too** by using the `[package]` fallback pattern:
```toml
[package]
name = "warzone-protocol"
version = "0.0.20"
edition = "2021"
# Remove version.workspace and edition.workspace — use explicit values
```
This way the crate still works inside the featherChat workspace AND can be imported by WZP as a path dependency.
**Test:** From the WZP repo, this should work:
```toml
# In wzp-crypto/Cargo.toml:
warzone-protocol = { path = "../../deps/featherchat/warzone/crates/warzone-protocol" }
```
**Effort:** 30 minutes. Mechanical replacement, then `cargo build` to verify.
#### FC-CRATE-2: Add `wzp-proto` as a git dependency for `CallSignal`
**File:** `warzone/crates/warzone-protocol/Cargo.toml`
```toml
[dependencies]
# WarzonePhone signaling types (for CallSignal WireMessage variant)
wzp-proto = { git = "ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git", optional = true }
[features]
default = []
wzp = ["wzp-proto"]
```
**File:** `warzone/crates/warzone-protocol/src/message.rs`
```rust
#[derive(Serialize, Deserialize, Clone, Debug)]
pub enum WireMessage {
    // ... existing variants ...

    /// Voice/video call signaling (requires "wzp" feature).
    #[cfg(feature = "wzp")]
    CallSignal {
        id: String,
        sender_fingerprint: String,
        signal: wzp_proto::SignalMessage, // Typed, not opaque bytes
    },

    /// Voice/video call signaling (without wzp feature — opaque bytes).
    #[cfg(not(feature = "wzp"))]
    CallSignal {
        id: String,
        sender_fingerprint: String,
        signal: Vec<u8>, // Opaque JSON bytes
    },
}
```
**Alternative (simpler):** Always use `Vec<u8>` for the signal field and let the consumer deserialize. This avoids the feature flag complexity:
```rust
CallSignal {
    id: String,
    sender_fingerprint: String,
    signal_json: String, // JSON-serialized wzp_proto::SignalMessage
},
```
featherChat server treats it as opaque. WZP client deserializes it to `SignalMessage`.
**Effort:** 1-2 hours.
#### FC-CRATE-3: Extract shared identity types to a micro-crate (optional, long-term)
Create `warzone-identity` crate containing only:
- `Seed` (generation, from_bytes, from_hex, from_mnemonic, to_mnemonic)
- `IdentityKeyPair` (derive from seed)
- `PublicIdentity` (verifying key, encryption key, fingerprint)
- `Fingerprint` (SHA-256 truncated, display format)
- `hkdf_derive()` helper
Both `warzone-protocol` and `wzp-crypto` depend on `warzone-identity` instead of each implementing their own. This is the cleanest long-term solution but requires more refactoring.
**Crate structure:**
```
warzone-identity/
├── Cargo.toml (standalone, no workspace inheritance)
├── src/
│   ├── lib.rs
│   ├── seed.rs
│   ├── identity.rs
│   ├── fingerprint.rs
│   └── mnemonic.rs
```
**Dependencies:** ed25519-dalek, x25519-dalek, hkdf, sha2, bip39, hex, zeroize
Both projects import it:
```toml
# featherChat:
warzone-identity = { path = "../warzone-identity" }
# WZP (via submodule):
warzone-identity = { path = "deps/featherchat/warzone-identity" }
```
**Effort:** Half a day. Extract code from warzone-protocol, update imports in both projects.
---
### What WZP Needs to Do (after featherChat completes FC-CRATE-1)
#### WZP-CRATE-1: Replace identity mirror with real dependency
Once `warzone-protocol` is standalone-importable:
**File:** `crates/wzp-crypto/Cargo.toml`
```toml
# Remove bip39 and hex (now comes from warzone-protocol)
# Add:
warzone-protocol = { path = "../../deps/featherchat/warzone/crates/warzone-protocol" }
```
**File:** `crates/wzp-crypto/src/identity.rs`
Replace the entire file with re-exports:
```rust
//! featherChat identity — re-exported from warzone-protocol.
pub use warzone_protocol::identity::{IdentityKeyPair, Seed};
pub use warzone_protocol::types::Fingerprint;
```
**File:** `crates/wzp-crypto/src/handshake.rs`
Use `warzone_protocol::identity::Seed` internally instead of raw HKDF calls.
**Effort:** 1 hour (after FC-CRATE-1 is done).
#### WZP-CRATE-2: Make `wzp-proto` standalone-importable
`wzp-proto` already has explicit dependency versions (not workspace-inherited for external deps). It should work as a git dependency from featherChat. Verify:
```bash
# From a scratch project:
cargo add --git ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git wzp-proto
```
If this fails, replace any remaining workspace references in `wzp-proto/Cargo.toml` with explicit versions.
**Key types featherChat needs from wzp-proto:**
- `SignalMessage` (CallOffer, CallAnswer, IceCandidate, Hangup, etc.)
- `QualityProfile` (for codec negotiation)
- `HangupReason`
**Effort:** 30 minutes to verify and fix.
---
## Recommended Order
1. **FC-CRATE-1** — Make warzone-protocol standalone (30 min, unblocks everything)
2. **WZP-CRATE-2** — Verify wzp-proto works as git dep (30 min)
3. **FC-CRATE-2** — Add CallSignal with opaque signal_json field (1-2 hours)
4. **WZP-CRATE-1** — Replace identity mirror with real dep (1 hour)
5. **FC-CRATE-3** — Extract warzone-identity micro-crate (optional, half day)
After steps 1-4, both projects share types directly:
- WZP imports `warzone-protocol` for identity/seed/fingerprint
- featherChat imports `wzp-proto` (via git) for `SignalMessage` types
- No duplicated code, no drift risk
---
## Dependency Graph After Integration
```
        warzone-identity  (shared micro-crate, optional step 5)
         ↑                     ↑
warzone-protocol           wzp-crypto
         ↑                     ↑
warzone-server             wzp-proto ← wzp-codec, wzp-fec, wzp-transport
         ↑                     ↑
warzone-client             wzp-client, wzp-relay, wzp-web
```

docs/grafana-dashboard.json Normal file

@@ -0,0 +1,885 @@
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "10.0.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "gauge",
"name": "Gauge",
"version": ""
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
},
{
"type": "panel",
"id": "barchart",
"name": "Bar chart",
"version": ""
},
{
"type": "panel",
"id": "histogram",
"name": "Histogram",
"version": ""
},
{
"type": "panel",
"id": "table",
"name": "Table",
"version": ""
},
{
"type": "panel",
"id": "stat",
"name": "Stat",
"version": ""
}
],
"id": null,
"uid": "wzp-relay-v1",
"title": "WarzonePhone Relay Dashboard",
"description": "Monitoring dashboard for WarzonePhone relay, call quality, inter-relay mesh, and web bridge.",
"tags": ["wzp", "voip", "relay"],
"style": "dark",
"timezone": "browser",
"editable": true,
"graphTooltip": 1,
"fiscalYearStartMonth": 0,
"liveNow": false,
"refresh": "10s",
"schemaVersion": 39,
"version": 1,
"time": {
"from": "now-1h",
"to": "now"
},
"templating": {
"list": []
},
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": { "type": "grafana", "uid": "-- Grafana --" },
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"panels": [
{
"type": "row",
"title": "Relay Health",
"collapsed": false,
"gridPos": { "h": 1, "w": 24, "x": 0, "y": 0 },
"id": 1,
"panels": []
},
{
"type": "gauge",
"title": "Active Sessions",
"gridPos": { "h": 8, "w": 4, "x": 0, "y": 1 },
"id": 2,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_relay_active_sessions",
"legendFormat": "sessions",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "thresholds" },
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 50 },
{ "color": "red", "value": 100 }
]
},
"unit": "none",
"min": 0
},
"overrides": []
},
"options": {
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"showThresholdLabels": false,
"showThresholdMarkers": true
}
},
{
"type": "gauge",
"title": "Active Rooms",
"gridPos": { "h": 8, "w": 4, "x": 4, "y": 1 },
"id": 3,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_relay_active_rooms",
"legendFormat": "rooms",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "thresholds" },
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 25 },
{ "color": "red", "value": 50 }
]
},
"unit": "none",
"min": 0
},
"overrides": []
},
"options": {
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"showThresholdLabels": false,
"showThresholdMarkers": true
}
},
{
"type": "timeseries",
"title": "Packets/sec",
"gridPos": { "h": 8, "w": 4, "x": 8, "y": 1 },
"id": 4,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "rate(wzp_relay_packets_forwarded_total[1m])",
"legendFormat": "packets/s",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 20,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto",
"gradientMode": "scheme"
},
"unit": "pps",
"min": 0
},
"overrides": []
},
"options": {
"tooltip": { "mode": "single", "sort": "none" },
"legend": { "displayMode": "list", "placement": "bottom" }
}
},
{
"type": "timeseries",
"title": "Bytes/sec",
"gridPos": { "h": 8, "w": 4, "x": 12, "y": 1 },
"id": 5,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "rate(wzp_relay_bytes_forwarded_total[1m])",
"legendFormat": "bytes/s",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 20,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto",
"gradientMode": "scheme"
},
"unit": "Bps",
"min": 0
},
"overrides": []
},
"options": {
"tooltip": { "mode": "single", "sort": "none" },
"legend": { "displayMode": "list", "placement": "bottom" }
}
},
{
"type": "barchart",
"title": "Auth Success vs Failure",
"gridPos": { "h": 8, "w": 4, "x": 16, "y": 1 },
"id": 6,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "rate(wzp_relay_auth_attempts_total[5m])",
"legendFormat": "{{result}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"stacking": "normal",
"fillOpacity": 80,
"lineWidth": 1,
"gradientMode": "none",
"axisCenteredZero": false
},
"unit": "ops"
},
"overrides": [
{
"matcher": { "id": "byName", "options": "ok" },
"properties": [
{ "id": "color", "value": { "fixedColor": "green", "mode": "fixed" } }
]
},
{
"matcher": { "id": "byName", "options": "fail" },
"properties": [
{ "id": "color", "value": { "fixedColor": "red", "mode": "fixed" } }
]
}
]
},
"options": {
"tooltip": { "mode": "single", "sort": "none" },
"legend": { "displayMode": "list", "placement": "bottom" },
"orientation": "auto",
"barWidth": 0.9,
"groupWidth": 0.7,
"xTickLabelRotation": 0,
"showValue": "auto",
"stacking": "normal"
}
},
{
"type": "histogram",
"title": "Handshake Duration",
"gridPos": { "h": 8, "w": 4, "x": 20, "y": 1 },
"id": 7,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_relay_handshake_duration_seconds_bucket",
"legendFormat": "{{le}}",
"refId": "A",
"format": "heatmap"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"fillOpacity": 80,
"lineWidth": 1,
"gradientMode": "scheme"
},
"unit": "s"
},
"overrides": []
},
"options": {
"tooltip": { "mode": "single", "sort": "none" },
"legend": { "displayMode": "list", "placement": "bottom" },
"bucketOffset": 0,
"combine": false,
"fillOpacity": 80,
"gradientMode": "scheme"
}
},
{
"type": "row",
"title": "Call Quality (per-session)",
"collapsed": false,
"gridPos": { "h": 1, "w": 24, "x": 0, "y": 9 },
"id": 10,
"panels": []
},
{
"type": "timeseries",
"title": "Buffer Depth",
"gridPos": { "h": 8, "w": 6, "x": 0, "y": 10 },
"id": 11,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_relay_session_jitter_buffer_depth",
"legendFormat": "{{session_id}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto"
},
"unit": "none",
"min": 0
},
"overrides": []
},
"options": {
"tooltip": { "mode": "multi", "sort": "desc" },
"legend": { "displayMode": "table", "placement": "bottom", "calcs": ["lastNotNull", "mean"] }
}
},
{
"type": "timeseries",
"title": "Loss %",
"gridPos": { "h": 8, "w": 6, "x": 6, "y": 10 },
"id": 12,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_relay_session_loss_pct",
"legendFormat": "{{session_id}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto"
},
"unit": "percent",
"min": 0,
"max": 100,
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 2 },
{ "color": "red", "value": 5 }
]
}
},
"overrides": []
},
"options": {
"tooltip": { "mode": "multi", "sort": "desc" },
"legend": { "displayMode": "table", "placement": "bottom", "calcs": ["lastNotNull", "mean", "max"] }
}
},
{
"type": "timeseries",
"title": "RTT",
"gridPos": { "h": 8, "w": 6, "x": 12, "y": 10 },
"id": 13,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_relay_session_rtt_ms",
"legendFormat": "{{session_id}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto"
},
"unit": "ms",
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 100 },
{ "color": "red", "value": 300 }
]
}
},
"overrides": []
},
"options": {
"tooltip": { "mode": "multi", "sort": "desc" },
"legend": { "displayMode": "table", "placement": "bottom", "calcs": ["lastNotNull", "mean", "max"] }
}
},
{
"type": "timeseries",
"title": "Underruns",
"gridPos": { "h": 8, "w": 6, "x": 18, "y": 10 },
"id": 14,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "rate(wzp_relay_session_underruns_total[1m])",
"legendFormat": "{{session_id}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto"
},
"unit": "ops",
"min": 0
},
"overrides": []
},
"options": {
"tooltip": { "mode": "multi", "sort": "desc" },
"legend": { "displayMode": "table", "placement": "bottom", "calcs": ["lastNotNull", "mean"] }
}
},
{
"type": "row",
"title": "Inter-Relay Mesh",
"collapsed": false,
"gridPos": { "h": 1, "w": 24, "x": 0, "y": 18 },
"id": 20,
"panels": []
},
{
"type": "table",
"title": "RTT Heatmap",
"gridPos": { "h": 8, "w": 6, "x": 0, "y": 19 },
"id": 21,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_probe_rtt_ms",
"legendFormat": "{{target}}",
"refId": "A",
"instant": true,
"format": "table"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "thresholds" },
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 50 },
{ "color": "orange", "value": 100 },
{ "color": "red", "value": 200 }
]
},
"unit": "ms",
"custom": {
"displayMode": "color-background",
"align": "auto",
"inspect": false
}
},
"overrides": []
},
"options": {
"showHeader": true,
"sortBy": [{ "displayName": "Value", "desc": true }],
"cellHeight": "sm",
"footer": { "show": false }
},
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": { "Time": true, "__name__": true, "instance": true, "job": true },
"renameByName": { "target": "Target", "Value": "RTT (ms)" }
}
}
]
},
{
"type": "timeseries",
"title": "Loss",
"gridPos": { "h": 8, "w": 6, "x": 6, "y": 19 },
"id": 22,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_probe_loss_pct",
"legendFormat": "{{target}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto"
},
"unit": "percent",
"min": 0,
"max": 100,
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
},
"overrides": []
},
"options": {
"tooltip": { "mode": "multi", "sort": "desc" },
"legend": { "displayMode": "table", "placement": "bottom", "calcs": ["lastNotNull", "mean", "max"] }
}
},
{
"type": "timeseries",
"title": "Jitter",
"gridPos": { "h": 8, "w": 6, "x": 12, "y": 19 },
"id": 23,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_probe_jitter_ms",
"legendFormat": "{{target}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 10,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto"
},
"unit": "ms",
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 10 },
{ "color": "red", "value": 30 }
]
}
},
"overrides": []
},
"options": {
"tooltip": { "mode": "multi", "sort": "desc" },
"legend": { "displayMode": "table", "placement": "bottom", "calcs": ["lastNotNull", "mean", "max"] }
}
},
{
"type": "stat",
"title": "Probe Status",
"gridPos": { "h": 8, "w": 6, "x": 18, "y": 19 },
"id": 24,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_probe_up",
"legendFormat": "{{target}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "thresholds" },
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "red", "value": null },
{ "color": "green", "value": 1 }
]
},
"mappings": [
{ "type": "value", "options": { "0": { "text": "DOWN", "color": "red" }, "1": { "text": "UP", "color": "green" } } }
],
"unit": "none",
"min": 0,
"max": 1
},
"overrides": []
},
"options": {
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto",
"colorMode": "background",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto"
}
},
{
"type": "row",
"title": "Web Bridge",
"collapsed": false,
"gridPos": { "h": 1, "w": 24, "x": 0, "y": 27 },
"id": 30,
"panels": []
},
{
"type": "gauge",
"title": "Active Connections",
"gridPos": { "h": 8, "w": 6, "x": 0, "y": 28 },
"id": 31,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_web_active_connections",
"legendFormat": "connections",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "thresholds" },
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 50 },
{ "color": "red", "value": 100 }
]
},
"unit": "none",
"min": 0
},
"overrides": []
},
"options": {
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"showThresholdLabels": false,
"showThresholdMarkers": true
}
},
{
"type": "timeseries",
"title": "Frames Bridged",
"gridPos": { "h": 8, "w": 6, "x": 6, "y": 28 },
"id": 32,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "rate(wzp_web_frames_bridged_total[1m])",
"legendFormat": "{{direction}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 20,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto",
"gradientMode": "scheme"
},
"unit": "ops",
"min": 0
},
"overrides": [
{
"matcher": { "id": "byName", "options": "up" },
"properties": [
{ "id": "color", "value": { "fixedColor": "blue", "mode": "fixed" } }
]
},
{
"matcher": { "id": "byName", "options": "down" },
"properties": [
{ "id": "color", "value": { "fixedColor": "purple", "mode": "fixed" } }
]
}
]
},
"options": {
"tooltip": { "mode": "multi", "sort": "desc" },
"legend": { "displayMode": "list", "placement": "bottom" }
}
},
{
"type": "timeseries",
"title": "Auth Failures",
"gridPos": { "h": 8, "w": 6, "x": 12, "y": 28 },
"id": 33,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "rate(wzp_web_auth_failures_total[5m])",
"legendFormat": "auth failures/s",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "fixed", "fixedColor": "red" },
"custom": {
"drawStyle": "line",
"lineInterpolation": "smooth",
"fillOpacity": 20,
"lineWidth": 2,
"pointSize": 5,
"showPoints": "auto",
"spanNulls": false,
"stacking": { "mode": "none", "group": "A" },
"axisPlacement": "auto"
},
"unit": "ops",
"min": 0
},
"overrides": []
},
"options": {
"tooltip": { "mode": "single", "sort": "none" },
"legend": { "displayMode": "list", "placement": "bottom" }
}
},
{
"type": "histogram",
"title": "Handshake Latency",
"gridPos": { "h": 8, "w": 6, "x": 18, "y": 28 },
"id": 34,
"datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
"targets": [
{
"expr": "wzp_web_handshake_latency_seconds_bucket",
"legendFormat": "{{le}}",
"refId": "A",
"format": "heatmap"
}
],
"fieldConfig": {
"defaults": {
"color": { "mode": "palette-classic" },
"custom": {
"fillOpacity": 80,
"lineWidth": 1,
"gradientMode": "scheme"
},
"unit": "s"
},
"overrides": []
},
"options": {
"tooltip": { "mode": "single", "sort": "none" },
"legend": { "displayMode": "list", "placement": "bottom" },
"bucketOffset": 0,
"combine": false,
"fillOpacity": 80,
"gradientMode": "scheme"
}
}
]
}

143
notes Normal file
View File

@@ -0,0 +1,143 @@
1. Add trunking (biggest win): Multiplex multiple sessions into a single QUIC datagram batch. A TrunkFrame could pack N mini-packets (session_id:2 + payload) into one datagram, sharing the QUIC overhead. This is your multiplexing idea
from the telemetry discussion — the probe test lines are already a step toward this.
2. Mini-frame format: For consecutive packets from the same session, use a 4-byte mini-header (just timestamp delta + payload length) instead of the full 12-byte MediaHeader. IAX2 does this and it cuts header overhead by 67%.
3. Comfort noise / silence suppression: IAX2 supports CN frames — don't send packets during silence, saving ~50% bandwidth in typical conversations. WZP always sends frames even during silence.
4. Jitter buffer from IAX2's design: IAX2 uses adaptive playout delay based on observed jitter, not fixed target depth. This is exactly what T1-S4 should implement — your sweep tool (S3) can guide the parameters.
5. Call control completeness: IAX2 has HOLD, TRANSFER, QUELCH (mute from server), UNQUELCH. WZP's SignalMessage only has Offer/Answer/Hangup/Ringing. Adding these would help featherChat integration.
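The trunking and mini-frame ideas above can be sketched together. This is a hypothetical wire layout, not the current WZP format: each mini-packet carries a 2-byte session_id and a 2-byte payload length, and N of them share one QUIC datagram.

```rust
// Hypothetical TrunkFrame packing: N mini-packets in one datagram.
// Assumed layout per mini-packet: session_id:u16 | payload_len:u16 | payload.
fn pack_trunk(packets: &[(u16, Vec<u8>)]) -> Vec<u8> {
    let mut buf = Vec::new();
    for (session_id, payload) in packets {
        buf.extend_from_slice(&session_id.to_be_bytes());
        buf.extend_from_slice(&(payload.len() as u16).to_be_bytes());
        buf.extend_from_slice(payload);
    }
    buf
}

fn unpack_trunk(mut buf: &[u8]) -> Vec<(u16, Vec<u8>)> {
    let mut out = Vec::new();
    while buf.len() >= 4 {
        let session_id = u16::from_be_bytes([buf[0], buf[1]]);
        let len = u16::from_be_bytes([buf[2], buf[3]]) as usize;
        if buf.len() < 4 + len {
            break; // truncated trunk frame, drop the remainder
        }
        out.push((session_id, buf[4..4 + len].to_vec()));
        buf = &buf[4 + len..];
    }
    out
}

fn main() {
    // Two 20-byte Opus frames from different sessions share one datagram:
    // 4 bytes of trunk overhead each instead of a full MediaHeader.
    let pkts = vec![(1u16, vec![0xAA; 20]), (2u16, vec![0xBB; 20])];
    let wire = pack_trunk(&pkts);
    assert_eq!(wire.len(), 2 * (4 + 20));
    assert_eq!(unpack_trunk(&wire), pkts);
}
```

At relay scale this amortizes the per-datagram QUIC overhead across sessions, which is where the IAX2 comparison says the biggest win is.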
Priority Recommendations
┌──────────────────────┬────────────────────────┬───────────────────────────────────────────────────┐
│ From IAX2 │ Maps to │ Impact │
├──────────────────────┼────────────────────────┼───────────────────────────────────────────────────┤
│ Trunking │ New P2 task │ High — reduces per-packet overhead at relay scale │
├──────────────────────┼────────────────────────┼───────────────────────────────────────────────────┤
│ Mini-frames │ Wire format change │ Medium — 67% header reduction │
├──────────────────────┼────────────────────────┼───────────────────────────────────────────────────┤
│ Silence suppression │ Codec layer │ Medium — 50% BW savings in typical calls │
├──────────────────────┼────────────────────────┼───────────────────────────────────────────────────┤
│ Adaptive playout │ P2-T1-S4 │ Medium — fixes the jitter/drift issue │
├──────────────────────┼────────────────────────┼───────────────────────────────────────────────────┤
│ Call control signals │ SignalMessage variants │ Low — UX feature for featherChat │
└──────────────────────┴────────────────────────┴───────────────────────────────────────────────────┘
The trunking gap is the most significant architectural difference. On everything else, WZP is either on par or ahead (crypto, FEC, identity).
btw, answer my question: imagine we are connected to mesh one, our id is 1 (some ETH wallet address), and I'm trying to call 2 (some other ETH address) which is not connected to our relay. Say I'm connected to relay 1, relay 1 is connected to relay 3, and relay 3 to relay 2. How do we achieve a connection?
Great question — this is relay-to-relay routing, and we don't have it yet. Here's what exists vs what's needed:
What We Have
- Client → Relay (direct QUIC)
- Relay → Relay probing (health metrics only, no media forwarding)
- Room-based forwarding within a single relay (SFU)
- Forward mode (relay → one remote relay, hardcoded pair)
What's Missing: Multi-Hop Call Routing
You (wallet 0x1) Peer (wallet 0x2)
│ │
▼ ▼
Relay 1 ──── Relay 3 ──── Relay 2
This requires:
1. Relay Discovery / Routing Table
Each relay needs to know which other relays exist and which users are connected where. Options:
- Gossip protocol: relays share their connected fingerprints with neighbors
- Central registry: featherChat server knows "user 0x2 is on Relay 2"
- DHT: fingerprint → relay mapping distributed across the mesh (like Kademlia)
2. Signaling Path Resolution
When you call 0x2:
1. Your client sends CallOffer through featherChat WS (already works)
2. featherChat knows 0x2's relay (from FC-5 group-to-room mapping or a presence API)
3. CallAnswer comes back with the relay chain: [relay1, relay3, relay2]
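The relay-chain lookup in step 3 is a plain shortest-path search over the relay neighbor graph. A minimal BFS sketch of the WZP-P3-T5 route resolution, with invented relay names and a toy in-memory graph standing in for whatever registry or gossip state the relays actually share:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Hypothetical route resolution: given each relay's neighbor list, find
// the relay chain from the caller's relay to the callee's relay via BFS.
fn resolve_route(
    neighbors: &HashMap<&str, Vec<&str>>,
    from: &str,
    to: &str,
) -> Option<Vec<String>> {
    let mut prev: HashMap<&str, &str> = HashMap::new();
    let mut seen: HashSet<&str> = HashSet::from([from]);
    let mut queue = VecDeque::from([from]);
    while let Some(relay) = queue.pop_front() {
        if relay == to {
            // Walk predecessors back to the start to build the chain.
            let mut chain = vec![relay.to_string()];
            let mut cur = relay;
            while let Some(&p) = prev.get(cur) {
                chain.push(p.to_string());
                cur = p;
            }
            chain.reverse();
            return Some(chain);
        }
        for &next in neighbors.get(relay).into_iter().flatten() {
            if seen.insert(next) {
                prev.insert(next, relay);
                queue.push_back(next);
            }
        }
    }
    None // target relay unreachable from here
}

fn main() {
    let mut mesh: HashMap<&str, Vec<&str>> = HashMap::new();
    mesh.insert("relay1", vec!["relay3"]);
    mesh.insert("relay3", vec!["relay1", "relay2"]);
    mesh.insert("relay2", vec!["relay3"]);
    assert_eq!(
        resolve_route(&mesh, "relay1", "relay2"),
        Some(vec!["relay1".into(), "relay3".into(), "relay2".into()])
    );
}
```

BFS gives the fewest-hop chain; a production version would weight edges by the probe RTT metrics the mesh already collects.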
3. Media Path Setup
Two approaches:
A. Relay chaining (simpler):
You → Relay1 → Relay3 → Relay2 → Peer
Each relay forwards packets to the next hop. We already have forward mode (--remote) for a single pair — extend it to be per-session with a routing table. Latency adds up but it works.
B. QUIC relay-assisted NAT traversal (better):
You → Relay1 → Relay3 → Relay2 → Peer (signaling)
You ←──────── direct QUIC ──────────→ Peer (media, if NAT allows)
Use relays for signaling + ICE candidate exchange, then attempt a direct connection. Fall back to relay chain if direct fails.
4. Suggested Task Breakdown
┌───────────┬──────────────────────────┬─────────────────────────────────────────────────────────────────────────┐
│ ID │ Task │ Description │
├───────────┼──────────────────────────┼─────────────────────────────────────────────────────────────────────────┤
│ WZP-P3-T4 │ Relay peer registry │ Each relay announces connected fingerprints to neighbors │
├───────────┼──────────────────────────┼─────────────────────────────────────────────────────────────────────────┤
│ WZP-P3-T5 │ Route resolution │ Given a target fingerprint, find the relay chain │
├───────────┼──────────────────────────┼─────────────────────────────────────────────────────────────────────────┤
│ WZP-P3-T6 │ Per-session forwarding │ Relay forwards media for a specific session to next-hop relay │
├───────────┼──────────────────────────┼─────────────────────────────────────────────────────────────────────────┤
│ WZP-P3-T7 │ Call setup over mesh │ Client → FC signaling → resolve route → setup relay chain → media flows │
├───────────┼──────────────────────────┼─────────────────────────────────────────────────────────────────────────┤
│ WZP-P3-T8 │ Direct path optimization │ After relay chain works, attempt ICE/STUN direct connection │
└───────────┴──────────────────────────┴─────────────────────────────────────────────────────────────────────────┘
5. The Practical Shortcut
Since featherChat already has presence (FC-6) and group membership (FC-5), the easiest first pass:
1. featherChat server knows which relay each user registered with (add relay_addr to the auth/connect flow)
2. CallOffer payload already carries relay_addr — the callee's client reads it and connects directly to Relay 1
3. Both clients connect to the same relay (caller's), joining the same hashed room
This avoids multi-hop entirely — the callee just connects to the caller's relay. Only fails if the callee can't reach that relay (firewall, geo), which is when you need the full mesh routing.
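For the "same hashed room" step, both clients can derive the room id deterministically from the wallet pair so no extra signaling round-trip is needed. A sketch under stated assumptions: the function name and scheme are invented, and std's DefaultHasher stands in for whatever cryptographic hash WZP would actually use.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical: caller and callee both compute the same room id from the
// sorted wallet pair. A real implementation would use a cryptographic
// hash (e.g. SHA-256), not DefaultHasher; this only shows the shape.
fn call_room_id(wallet_a: &str, wallet_b: &str) -> String {
    // Sort so ("0x1", "0x2") and ("0x2", "0x1") agree.
    let (lo, hi) = if wallet_a <= wallet_b {
        (wallet_a, wallet_b)
    } else {
        (wallet_b, wallet_a)
    };
    let mut h = DefaultHasher::new();
    lo.hash(&mut h);
    hi.hash(&mut h);
    format!("call-{:016x}", h.finish())
}

fn main() {
    // Both sides derive the identical room name independently.
    assert_eq!(call_room_id("0x1", "0x2"), call_room_id("0x2", "0x1"));
}
```

The relay never learns the raw wallet addresses, only the derived room name, which fits the existing hashed-room behavior.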
Want me to add these as P3 tasks and start with the shortcut approach?
1. Silence suppression: Opus offers VAD/DTX natively; WZP has it (comfort noise); IAX2 supports it natively (CN frames).
2. Adaptive quality algorithms: Signal pioneered adaptive jitter buffers, dynamic bitrate switching, and bandwidth estimation tuned across millions of calls. WZP's QualityAdapter is a first pass — their algorithms use ML-based predictors.
3. Echo cancellation / noise suppression: Signal uses RNNoise (ML-based), WhatsApp has proprietary DSP. WZP doesn't have any — it relies on the platform's built-in AEC.
4. Group calls: WebRTC SFUs (used by Signal, WhatsApp) handle 32+ participants with simulcast (multiple quality layers). WZP's room mode is a basic SFU without simulcast or quality layering.
Yes, partially. Here's what you can realistically use:
RNNoise (echo/noise suppression) — This is the easiest win. RNNoise is BSD-licensed C code, and there's a Rust crate (nnnoiseless — a pure Rust port). You could add it as a pre-processing step in CallEncoder before Opus encoding. No
legal issues, no dependency on Signal.
Jitter buffer algorithms — Signal's adaptive jitter buffer code is in their open-source ringrtc repo (GPLv3). You can study the algorithm (NetEq-style adaptive playout) but can't copy code directly into WZP (MIT) due to GPL. However,
the concepts (ML-based delay prediction, talkspurt detection) are well-documented in academic papers and WebRTC's NetEq source (BSD-licensed). You could:
- Port ideas from WebRTC's NetEq (BSD) — this is what Signal built on anyway
- Use your T5 telemetry + sweep tool to train a simple predictor specific to WZP
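The NetEq-style idea reduces to: track interarrival jitter with an EWMA and size the buffer target from it instead of using a fixed depth. A minimal sketch with assumed parameters (20 ms frames, 1/16 smoothing as in RFC 3550, headroom factor of 2, clamped to the web bridge's 1..20 frame range):

```rust
// Sketch of adaptive playout: EWMA of interarrival jitter drives the
// jitter buffer target depth. All constants here are assumptions.
struct AdaptivePlayout {
    jitter_ms: f64, // smoothed |interarrival - nominal| in ms
}

impl AdaptivePlayout {
    fn new() -> Self {
        Self { jitter_ms: 0.0 }
    }

    // Feed each packet's interarrival time; nominal spacing is 20 ms.
    fn on_packet(&mut self, interarrival_ms: f64) {
        let dev = (interarrival_ms - 20.0).abs();
        // RFC 3550-style smoothing: move 1/16 of the way to the new value.
        self.jitter_ms += (dev - self.jitter_ms) / 16.0;
    }

    // Target depth in 20 ms frames: absorb ~2x the observed jitter,
    // clamped to the [1, 20] range the web bridge config uses.
    fn target_depth(&self) -> u32 {
        let frames = (2.0 * self.jitter_ms / 20.0).ceil() as u32;
        frames.clamp(1, 20)
    }
}

fn main() {
    let mut p = AdaptivePlayout::new();
    assert_eq!(p.target_depth(), 1); // clean network: minimum depth
    for _ in 0..200 {
        p.on_packet(60.0); // sustained 40 ms deviation
    }
    assert!(p.target_depth() >= 3); // buffer grows under jitter
}
```

This is the behavior T1-S4 wants: depth 1 on a clean link, growing automatically under jitter, which the sweep tool can then tune.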
Bandwidth estimation — WebRTC's GCC (Google Congestion Control) is BSD-licensed and well-documented. You could implement a simplified version using your existing QualityReport data. Signal's version is derivative of this.
Simulcast / group quality layers — The concept is straightforward (encode at 2-3 quality levels, relay picks per-receiver). You already have QualityProfile with GOOD/DEGRADED/CATASTROPHIC — extend the encoder to produce multiple
streams and the relay to select per-participant. No need to copy Signal code for this.
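The per-receiver selection half of simulcast is a small decision function. A hypothetical sketch: the enum mirrors the GOOD/DEGRADED/CATASTROPHIC profiles mentioned above, but the loss thresholds and names here are assumptions, not WZP's actual QualityProfile API.

```rust
// Hypothetical per-receiver layer pick for simulcast: the relay maps a
// participant's recent loss percentage to one of three encoded streams.
// Thresholds (2% / 10%) are illustrative assumptions.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Layer {
    Good,         // full-bitrate stream
    Degraded,     // reduced-bitrate stream
    Catastrophic, // minimum-bitrate stream
}

fn pick_layer(loss_pct: f32) -> Layer {
    match loss_pct {
        l if l < 2.0 => Layer::Good,
        l if l < 10.0 => Layer::Degraded,
        _ => Layer::Catastrophic,
    }
}

fn main() {
    assert_eq!(pick_layer(0.5), Layer::Good);
    assert_eq!(pick_layer(5.0), Layer::Degraded);
    assert_eq!(pick_layer(25.0), Layer::Catastrophic);
}
```

The encoder side is the expensive part (2-3 parallel Opus encodes); the relay side, as shown, is nearly free per packet.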
What you CAN'T directly use:
- Signal's proprietary ML models (not published)
- WhatsApp's DSP (closed source)
- GPL code in MIT project without relicensing
Practical priority:
1. nnnoiseless crate for noise suppression (drop-in, huge quality win)
2. WebRTC NetEq-style adaptive playout for T1-S4 (BSD, replaces your fixed jitter buffer)
3. GCC-style bandwidth estimation for P3-T3
4. Simulcast for group calls (design from scratch, concept is simple)
The biggest gap is noise suppression — adding nnnoiseless would be maybe 2-3 hours of work and immediately noticeable.

View File

@@ -4,107 +4,240 @@ set -euo pipefail
# Build WarzonePhone Linux x86_64 release binaries using a Hetzner Cloud VPS. # Build WarzonePhone Linux x86_64 release binaries using a Hetzner Cloud VPS.
# Prerequisites: hcloud CLI authenticated, SSH key "wz" registered. # Prerequisites: hcloud CLI authenticated, SSH key "wz" registered.
# #
# Usage: ./scripts/build-linux.sh # Usage:
# ./scripts/build-linux.sh --prepare Create VM, install deps, upload source
# ./scripts/build-linux.sh --build Build release binaries on the VM
# ./scripts/build-linux.sh --transfer Download binaries from VM to local
# ./scripts/build-linux.sh --destroy Delete the VM
# ./scripts/build-linux.sh --all Run prepare + build + transfer (no destroy)
# #
# Outputs: target/linux-x86_64/wzp-relay, wzp-client, wzp-bench # The VM persists between steps so you can iterate on build errors.
SSH_KEY_NAME="wz" SSH_KEY_NAME="wz"
SSH_KEY_PATH="/Users/manwe/CascadeProjects/wzp" SSH_KEY_PATH="/Users/manwe/CascadeProjects/wzp"
SERVER_NAME="wzp-builder-$(date +%s)"
SERVER_TYPE="cx33" SERVER_TYPE="cx33"
IMAGE="debian-12" IMAGE="debian-12"
REMOTE_USER="root" REMOTE_USER="root"
OUTPUT_DIR="target/linux-x86_64" OUTPUT_DIR="target/linux-x86_64"
PROJECT_DIR="/Users/manwe/CascadeProjects/warzonePhone"
echo "=== WarzonePhone Linux Build ===" SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10"
# Ensure server gets deleted on any exit (success or failure) # ---------------------------------------------------------------------------
cleanup() { # Helpers
if [ -n "${SERVER_NAME:-}" ]; then # ---------------------------------------------------------------------------
echo " Cleaning up server $SERVER_NAME..."
hcloud server delete "$SERVER_NAME" 2>/dev/null || true get_vm_ip() {
local ip
ip=$(hcloud server list -o columns=ipv4 -o noheader 2>/dev/null | tail -1 | tr -d ' ')
if [ -z "$ip" ]; then
echo "ERROR: No Hetzner VM found. Run --prepare first." >&2
exit 1
fi fi
rm -f /tmp/wzp-src.tar.gz echo "$ip"
} }
trap cleanup EXIT
# 1. Create the build server ssh_cmd() {
echo "[1/7] Creating Hetzner server..." local ip
hcloud server create \ ip=$(get_vm_ip)
--name "$SERVER_NAME" \ ssh $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip" "$@"
--type "$SERVER_TYPE" \ }
--image "$IMAGE" \
--ssh-key "$SSH_KEY_NAME" \
--location fsn1 \
--quiet
SERVER_IP=$(hcloud server ip "$SERVER_NAME") scp_cmd() {
echo " Server: $SERVER_NAME @ $SERVER_IP" local ip
ip=$(get_vm_ip)
scp $SSH_OPTS -i "$SSH_KEY_PATH" "$@"
}
# SSH options: skip host key check, use our key get_vm_name() {
SSH="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 -i $SSH_KEY_PATH $REMOTE_USER@$SERVER_IP" hcloud server list -o columns=name -o noheader 2>/dev/null | tail -1 | tr -d ' '
SCP="scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i $SSH_KEY_PATH" }
# 2. Wait for SSH to come up # ---------------------------------------------------------------------------
echo "[2/7] Waiting for SSH..." # --prepare: Create VM, install deps, upload source
for i in $(seq 1 30); do # ---------------------------------------------------------------------------
if $SSH "echo ok" &>/dev/null; then
break do_prepare() {
local server_name="wzp-builder"
# Check if VM already exists
local existing
existing=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep wzp-builder || true)
if [ -n "$existing" ]; then
echo "VM already exists: $existing"
echo "Reusing it. Uploading fresh source..."
do_upload
return
fi fi
sleep 2
done
# 3. Install build dependencies echo "[1/5] Creating Hetzner VM..."
echo "[3/7] Installing build dependencies..." hcloud server create \
$SSH "apt-get update -qq && apt-get install -y -qq build-essential cmake pkg-config libasound2-dev curl git > /dev/null 2>&1" --name "$server_name" \
--type "$SERVER_TYPE" \
--image "$IMAGE" \
--ssh-key "$SSH_KEY_NAME" \
--location fsn1 \
--quiet
# 4. Install Rust local ip
echo "[4/7] Installing Rust..." ip=$(get_vm_ip)
$SSH "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable > /dev/null 2>&1" echo " VM: $server_name @ $ip"
# 5. Upload source code # Wait for SSH
echo "[5/7] Uploading source code..." echo "[2/5] Waiting for SSH..."
# Create a tarball excluding target/ and .git/ for i in $(seq 1 30); do
tar czf /tmp/wzp-src.tar.gz \ if ssh $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip" "echo ok" &>/dev/null; then
--exclude='target' \ break
--exclude='.git' \ fi
--exclude='.claude' \ sleep 2
-C /Users/manwe/CascadeProjects/warzonePhone . done
$SCP /tmp/wzp-src.tar.gz "$REMOTE_USER@$SERVER_IP:/root/wzp-src.tar.gz" # Install build dependencies
$SSH "mkdir -p /root/warzonePhone && tar xzf /root/wzp-src.tar.gz -C /root/warzonePhone" echo "[3/5] Installing build dependencies..."
ssh_cmd "apt-get update -qq && apt-get install -y -qq build-essential cmake pkg-config libasound2-dev libssl-dev curl git libstdc++-12-dev > /dev/null 2>&1"
# 6. Build release binaries (headless + audio variants) # Install Rust
echo "[6/8] Building all binaries..." echo "[4/5] Installing Rust..."
$SSH "source ~/.cargo/env && cd /root/warzonePhone && cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-bench --bin wzp-web 2>&1" | tail -3 ssh_cmd "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable > /dev/null 2>&1"
echo "[7/8] Building audio-enabled client..." # Upload source
$SSH "source ~/.cargo/env && cd /root/warzonePhone && cargo build --release --bin wzp-client --features audio 2>&1" | tail -3 echo "[5/5] Uploading source code..."
$SSH "cp /root/warzonePhone/target/release/wzp-client /root/warzonePhone/target/release/wzp-client-audio" do_upload
$SSH "source ~/.cargo/env && cd /root/warzonePhone && cargo build --release --bin wzp-client 2>&1" | tail -1
# 8. Download binaries + static files echo ""
echo "[8/8] Downloading binaries..." echo "=== VM Ready ==="
mkdir -p "$OUTPUT_DIR/static" echo "IP: $ip"
$SCP "$REMOTE_USER@$SERVER_IP:/root/warzonePhone/target/release/wzp-relay" "$OUTPUT_DIR/wzp-relay" echo "SSH: ssh -i $SSH_KEY_PATH root@$ip"
$SCP "$REMOTE_USER@$SERVER_IP:/root/warzonePhone/target/release/wzp-client" "$OUTPUT_DIR/wzp-client" echo ""
$SCP "$REMOTE_USER@$SERVER_IP:/root/warzonePhone/target/release/wzp-client-audio" "$OUTPUT_DIR/wzp-client-audio" echo "Next: ./scripts/build-linux.sh --build"
$SCP "$REMOTE_USER@$SERVER_IP:/root/warzonePhone/target/release/wzp-bench" "$OUTPUT_DIR/wzp-bench" }
$SCP "$REMOTE_USER@$SERVER_IP:/root/warzonePhone/target/release/wzp-web" "$OUTPUT_DIR/wzp-web"
$SCP "$REMOTE_USER@$SERVER_IP:/root/warzonePhone/crates/wzp-web/static/index.html" "$OUTPUT_DIR/static/index.html"
# Show results (server is deleted by EXIT trap) do_upload() {
echo "" echo " Creating source tarball..."
echo "=== Build Complete ===" tar czf /tmp/wzp-src.tar.gz \
ls -lh "$OUTPUT_DIR"/wzp-* --exclude='target' \
echo "" --exclude='.git' \
echo "Binaries:" --exclude='.claude' \
echo " wzp-relay — relay daemon" --exclude='notes' \
echo " wzp-client — headless client (--send-tone, --record)" -C "$PROJECT_DIR" . 2>/dev/null
echo " wzp-client-audio — client with mic/speakers (needs libasound2)"
echo " wzp-web — web bridge (serve with static/ folder)" local ip
echo " wzp-bench — benchmarks" ip=$(get_vm_ip)
echo " static/ — web UI files" echo " Uploading to VM..."
echo "" scp $SSH_OPTS -i "$SSH_KEY_PATH" /tmp/wzp-src.tar.gz "$REMOTE_USER@$ip:/root/wzp-src.tar.gz" 2>/dev/null
echo "Deploy with:" ssh_cmd "rm -rf /root/warzonePhone && mkdir -p /root/warzonePhone && tar xzf /root/wzp-src.tar.gz -C /root/warzonePhone" 2>/dev/null
echo " scp $OUTPUT_DIR/wzp-* user@server:~/wzp/" rm -f /tmp/wzp-src.tar.gz
echo " Source uploaded."
}
# ---------------------------------------------------------------------------
# --build: Build release binaries on the VM
# ---------------------------------------------------------------------------
do_build() {
local ip
ip=$(get_vm_ip)
echo "=== Building on $ip ==="
echo "[1/3] Building relay + client + web..."
ssh_cmd "source ~/.cargo/env && cd /root/warzonePhone && cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-bench --bin wzp-web 2>&1"
echo ""
echo "[2/3] Building audio-enabled client..."
ssh_cmd "source ~/.cargo/env && cd /root/warzonePhone && cargo build --release --bin wzp-client --features audio 2>&1" | tail -5
ssh_cmd "cp /root/warzonePhone/target/release/wzp-client /root/warzonePhone/target/release/wzp-client-audio"
# Rebuild without the audio feature so wzp-client itself stays headless
ssh_cmd "source ~/.cargo/env && cd /root/warzonePhone && cargo build --release --bin wzp-client 2>&1" | tail -3
echo ""
echo "[3/3] Verifying binaries..."
ssh_cmd "ls -lh /root/warzonePhone/target/release/wzp-relay /root/warzonePhone/target/release/wzp-client /root/warzonePhone/target/release/wzp-web /root/warzonePhone/target/release/wzp-bench /root/warzonePhone/target/release/wzp-client-audio"
echo ""
echo "=== Build Complete ==="
echo "Next: ./scripts/build-linux.sh --transfer"
}
# ---------------------------------------------------------------------------
# --transfer: Download binaries from VM to local
# ---------------------------------------------------------------------------
do_transfer() {
local ip
ip=$(get_vm_ip)
echo "=== Downloading binaries from $ip ==="
mkdir -p "$OUTPUT_DIR/static"
for bin in wzp-relay wzp-client wzp-client-audio wzp-bench wzp-web; do
echo " $bin..."
scp $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip:/root/warzonePhone/target/release/$bin" "$OUTPUT_DIR/$bin" 2>/dev/null
done
# Static files for web bridge
scp $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip:/root/warzonePhone/crates/wzp-web/static/index.html" "$OUTPUT_DIR/static/index.html" 2>/dev/null
scp $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip:/root/warzonePhone/crates/wzp-web/static/audio-processor.js" "$OUTPUT_DIR/static/audio-processor.js" 2>/dev/null
echo ""
echo "=== Transfer Complete ==="
ls -lh "$OUTPUT_DIR"/wzp-*
echo ""
echo "Deploy with:"
echo " scp $OUTPUT_DIR/wzp-relay $OUTPUT_DIR/wzp-client user@server:~/wzp/"
}
# ---------------------------------------------------------------------------
# --destroy: Delete the VM
# ---------------------------------------------------------------------------
do_destroy() {
local name
name=$(get_vm_name)
if [ -z "$name" ]; then
echo "No VM to destroy."
return
fi
echo "Deleting VM: $name"
hcloud server delete "$name"
echo "Done."
}
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
case "${1:-}" in
--prepare)
do_prepare
;;
--build)
do_build
;;
--transfer)
do_transfer
;;
--destroy)
do_destroy
;;
--all)
do_prepare
do_build
do_transfer
echo ""
echo "VM is still running. Destroy with: ./scripts/build-linux.sh --destroy"
;;
--upload)
do_upload
;;
*)
echo "Usage: $0 {--prepare|--build|--transfer|--destroy|--all|--upload}"
echo ""
echo "Steps:"
echo " --prepare Create VM, install deps, upload source"
echo " --build Build release binaries (shows full output)"
echo " --transfer Download binaries to target/linux-x86_64/"
echo " --destroy Delete the VM"
echo " --all Run prepare + build + transfer (VM persists)"
echo " --upload Re-upload source to existing VM"
exit 1
;;
esac