2 Commits

Author SHA1 Message Date
Siavash Sameni
e9e0d8d212 fix: replace tracing-android with android_logger (no sharded_slab SIGSEGV)
Some checks failed
Mirror to GitHub / mirror (push) Failing after 35s
Build Release Binaries / build-amd64 (push) Failing after 3m47s
tracing_subscriber::registry() allocates a sharded_slab which causes
SIGSEGV on Android 16 MTE devices during nativeInit. catch_unwind
can't catch SIGSEGV (it's a signal, not a panic).

Replace with android_logger (lightweight, no large allocations) +
tracing-log bridge so tracing::info! macros still work via logcat.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 06:01:07 +04:00
Siavash Sameni
4e0356ef37 feat: desktop-style UI rewrite — dark theme, relay manager, identicons
Some checks failed
Mirror to GitHub / mirror (push) Failing after 42s
Build Release Binaries / build-amd64 (push) Failing after 3m34s
Complete InCallScreen rewrite matching desktop layout:

Connect screen:
- Dark theme (matching desktop CSS vars)
- Relay button with lock icon + RTT, opens Manage Relays dialog
- Room/Alias text fields, AEC checkbox, settings gear
- Full-width red Connect button
- Identity with identicon + fingerprint
- Recent rooms grouped and colored by server

Manage Relays dialog:
- Server list with identicons, lock icons, RTT, delete
- Selected server highlighted with accent border
- Add relay inputs + button

In-call screen:
- Room name + gear, green dot + timer
- Gradient audio level meter
- Participant card with identicons + fingerprints
- Mic / End / Spk controls
- TX/RX stats

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 05:38:57 +04:00
37 changed files with 177 additions and 2825 deletions


@@ -1,72 +0,0 @@
----
-name: caveman
-description: >
-  Ultra-compressed communication mode. Slash token usage ~75% by speaking like caveman
-  while keeping full technical accuracy. Use when user says "caveman mode", "talk like caveman",
-  "use caveman", "less tokens", "be brief", or invokes /caveman. Also auto-triggers
-  when token efficiency is requested.
----
-
-# Caveman Mode
-
-## Core Rule
-
-Respond like smart caveman. Cut articles, filler, pleasantries. Keep all technical substance.
-
-## Grammar
-
-- Drop articles (a, an, the)
-- Drop filler (just, really, basically, actually, simply)
-- Drop pleasantries (sure, certainly, of course, happy to)
-- Short synonyms (big not extensive, fix not "implement a solution for")
-- No hedging (skip "it might be worth considering")
-- Fragments fine. No need full sentence
-- Technical terms stay exact. "Polymorphism" stays "polymorphism"
-- Code blocks unchanged. Caveman speak around code, not in code
-- Error messages quoted exact. Caveman only for explanation
-
-## Pattern
-
-```
-[thing] [action] [reason]. [next step].
-```
-
-Not:
-
-> Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by...
-
-Yes:
-
-> Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:
-
-## Examples
-
-**User:** Why is my React component re-rendering?
-
-**Normal (69 tokens):** "The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."
-
-**Caveman (19 tokens):** "New object ref each render. Inline object prop = new ref = re-render. Wrap in `useMemo`."
-
----
-
-**User:** How do I set up a PostgreSQL connection pool?
-
-**Caveman:**
-
-```
-Use `pg` pool:
-```
-
-```js
-const pool = new Pool({
-  max: 20,
-  idleTimeoutMillis: 30000,
-  connectionTimeoutMillis: 2000,
-})
-```
-
-```
-max = concurrent connections. Keep under DB limit. idleTimeout kill stale conn.
-```
-
-## Boundaries
-
-- Code: write normal. Caveman English only
-- Git commits: normal
-- PR descriptions: normal
-- User say "stop caveman" or "normal mode": revert immediately

Cargo.lock generated (241 lines changed)

@@ -297,12 +297,6 @@ dependencies = [
  "tower-service",
 ]
 
-[[package]]
-name = "base16ct"
-version = "0.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4c7f02d4ea65f2c1853089ffd8d2787bdbc63de2f0d29dedbcf8ccdfa0ccd4cf"
-
 [[package]]
 name = "base64"
 version = "0.22.1"
@@ -473,7 +467,6 @@ dependencies = [
  "iana-time-zone",
  "js-sys",
  "num-traits",
- "serde",
  "wasm-bindgen",
  "windows-link",
 ]
@@ -634,24 +627,6 @@ dependencies = [
  "libc",
 ]
 
-[[package]]
-name = "crunchy"
-version = "0.2.4"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "460fbee9c2c2f33933d720630a6a0bac33ba7053db5344fac858d4b8952d77d5"
-
-[[package]]
-name = "crypto-bigint"
-version = "0.5.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0dc92fb57ca44df6db8059111ab3af99a63d5d0f8375d9972e319a379c6bab76"
-dependencies = [
- "generic-array",
- "rand_core 0.6.4",
- "subtle",
- "zeroize",
-]
-
 [[package]]
 name = "crypto-common"
 version = "0.1.7"
@@ -675,7 +650,6 @@ dependencies = [
  "digest",
  "fiat-crypto",
  "rustc_version",
- "serde",
  "subtle",
  "zeroize",
 ]
@@ -842,32 +816,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
 dependencies = [
  "block-buffer",
- "const-oid",
  "crypto-common",
  "subtle",
 ]
 
-[[package]]
-name = "dirs"
-version = "6.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c3e8aa94d75141228480295a7d0e7feb620b1a5ad9f12bc40be62411e38cce4e"
-dependencies = [
- "dirs-sys",
-]
-
-[[package]]
-name = "dirs-sys"
-version = "0.5.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e01a3366d27ee9890022452ee61b2b63a67e6f13f58900b651ff5665f0bb1fab"
-dependencies = [
- "libc",
- "option-ext",
- "redox_users",
- "windows-sys 0.61.2",
-]
-
 [[package]]
 name = "displaydoc"
 version = "0.2.5"
@@ -898,21 +850,6 @@ dependencies = [
  "rustfft",
 ]
 
-[[package]]
-name = "ecdsa"
-version = "0.16.9"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ee27f32b5c5292967d2d4a9d7f1e0b0aed2c15daded5a60300e4abb9d8020bca"
-dependencies = [
- "der",
- "digest",
- "elliptic-curve",
- "rfc6979",
- "serdect",
- "signature",
- "spki",
-]
-
 [[package]]
 name = "ed25519"
 version = "2.2.3"
@@ -920,7 +857,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "115531babc129696a58c64a4fef0a8bf9e9698629fb97e9e40767d235cfbcd53"
 dependencies = [
  "pkcs8",
- "serde",
  "signature",
 ]
@@ -945,26 +881,6 @@ version = "1.15.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"
 
-[[package]]
-name = "elliptic-curve"
-version = "0.13.8"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b5e6043086bf7973472e0c7dff2142ea0b680d30e18d9cc40f267efbf222bd47"
-dependencies = [
- "base16ct",
- "crypto-bigint",
- "digest",
- "ff",
- "generic-array",
- "group",
- "pkcs8",
- "rand_core 0.6.4",
- "sec1",
- "serdect",
- "subtle",
- "zeroize",
-]
-
 [[package]]
 name = "encoding_rs"
 version = "0.8.35"
@@ -1008,16 +924,6 @@ version = "2.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
 
-[[package]]
-name = "ff"
-version = "0.13.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c0b50bfb653653f9ca9095b427bed08ab8d75a137839d9ad64eb11810d5b6393"
-dependencies = [
- "rand_core 0.6.4",
- "subtle",
-]
-
 [[package]]
 name = "fiat-crypto"
 version = "0.2.9"
@@ -1178,7 +1084,6 @@ checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
 dependencies = [
  "typenum",
  "version_check",
- "zeroize",
 ]
 
 [[package]]
@@ -1238,17 +1143,6 @@ version = "0.3.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "0cc23270f6e1808e30a928bdc84dea0b9b4136a8bc82338574f23baf47bbd280"
 
-[[package]]
-name = "group"
-version = "0.13.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f0f9ef7462f7c099f518d754361858f86d8a07af53ba9af0fe635bbccb151a63"
-dependencies = [
- "ff",
- "rand_core 0.6.4",
- "subtle",
-]
-
 [[package]]
 name = "h2"
 version = "0.4.13"
@@ -1732,21 +1626,6 @@ dependencies = [
  "wasm-bindgen",
 ]
 
-[[package]]
-name = "k256"
-version = "0.13.4"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f6e3919bbaa2945715f0bb6d3934a173d1e9a59ac23767fbaaef277265a7411b"
-dependencies = [
- "cfg-if",
- "ecdsa",
- "elliptic-curve",
- "once_cell",
- "serdect",
- "sha2",
- "signature",
-]
-
 [[package]]
 name = "lazy_static"
 version = "1.5.0"
@@ -1781,15 +1660,6 @@ version = "0.2.16"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b6d2cec3eae94f9f509c767b45932f1ada8350c4bdb85af2fcab4a3c14807981"
 
-[[package]]
-name = "libredox"
-version = "0.1.15"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7ddbf48fd451246b1f8c2610bd3b4ac0cc6e149d89832867093ab69a17194f08"
-dependencies = [
- "libc",
-]
-
 [[package]]
 name = "linux-raw-sys"
 version = "0.12.1"
@@ -1832,15 +1702,6 @@ dependencies = [
  "libc",
 ]
 
-[[package]]
-name = "matchers"
-version = "0.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9"
-dependencies = [
- "regex-automata",
-]
-
 [[package]]
 name = "matchit"
 version = "0.7.3"
@@ -2119,12 +1980,6 @@ dependencies = [
  "vcpkg",
 ]
 
-[[package]]
-name = "option-ext"
-version = "0.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d"
-
 [[package]]
 name = "os_str_bytes"
 version = "6.6.1"
@@ -2465,17 +2320,6 @@ dependencies = [
  "bitflags 2.11.0",
 ]
 
-[[package]]
-name = "redox_users"
-version = "0.5.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a4e608c6638b9c18977b00b475ac1f28d14e84b27d8d42f70e0bf1e3dec127ac"
-dependencies = [
- "getrandom 0.2.17",
- "libredox",
- "thiserror 2.0.18",
-]
-
 [[package]]
 name = "regex"
 version = "1.12.3"
@@ -2545,16 +2389,6 @@ dependencies = [
  "web-sys",
 ]
 
-[[package]]
-name = "rfc6979"
-version = "0.4.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f8dd2a808d456c4a54e300a23e9f5a67e122c3024119acbfd73e3bf664491cb2"
-dependencies = [
- "hmac",
- "subtle",
-]
-
 [[package]]
 name = "ring"
 version = "0.17.14"
@@ -2733,21 +2567,6 @@ version = "1.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 
-[[package]]
-name = "sec1"
-version = "0.7.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d3e97a565f76233a6003f9f5c54be1d9c5bdfa3eccfb189469f11ec4901c47dc"
-dependencies = [
- "base16ct",
- "der",
- "generic-array",
- "pkcs8",
- "serdect",
- "subtle",
- "zeroize",
-]
-
 [[package]]
 name = "security-framework"
 version = "3.7.0"
@@ -2852,16 +2671,6 @@ dependencies = [
  "serde",
 ]
 
-[[package]]
-name = "serdect"
-version = "0.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a84f14a19e9a014bb9f4512488d9829a68e04ecabffb0f9904cd1ace94598177"
-dependencies = [
- "base16ct",
- "serde",
-]
-
 [[package]]
 name = "sha1"
 version = "0.10.6"
@@ -2915,7 +2724,6 @@ version = "2.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "77549399552de45a898a580c1b41d445bf730df867cc44e6c0233bbc4b8329de"
 dependencies = [
- "digest",
  "rand_core 0.6.4",
 ]
@@ -3129,15 +2937,6 @@ version = "0.1.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "7694e1cfe791f8d31026952abf09c69ca6f6fa4e1a1229e18988f06a04a12dca"
 
-[[package]]
-name = "tiny-keccak"
-version = "2.0.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2c9d3793400a45f954c52e73d068316d76b6f4e36977e3fcebb13a2721e80237"
-dependencies = [
- "crunchy",
-]
-
 [[package]]
 name = "tinystr"
 version = "0.8.2"
@@ -3436,14 +3235,10 @@ version = "0.3.23"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "cb7f578e5945fb242538965c2d0b04418d38ec25c79d160cd279bf0731c8d319"
 dependencies = [
- "matchers",
  "nu-ansi-term",
- "once_cell",
- "regex-automata",
  "sharded-slab",
  "smallvec",
  "thread_local",
- "tracing",
  "tracing-core",
  "tracing-log",
 ]
@@ -3572,18 +3367,6 @@ version = "1.0.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"
 
-[[package]]
-name = "uuid"
-version = "1.23.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5ac8b6f42ead25368cf5b098aeb3dc8a1a2c05a3eee8a9a1a68c640edbfc79d9"
-dependencies = [
- "getrandom 0.4.2",
- "js-sys",
- "serde_core",
- "wasm-bindgen",
-]
-
 [[package]]
 name = "valuable"
 version = "0.1.1"
@@ -3623,28 +3406,7 @@ dependencies = [
 [[package]]
 name = "warzone-protocol"
-version = "0.0.38"
-dependencies = [
- "base64",
- "bincode",
- "bip39",
- "chacha20poly1305",
- "chrono",
- "curve25519-dalek",
- "ed25519-dalek",
- "hex",
- "hkdf",
- "k256",
- "rand 0.8.5",
- "serde",
- "serde_json",
- "sha2",
- "thiserror 2.0.18",
- "tiny-keccak",
- "uuid",
- "x25519-dalek",
- "zeroize",
-]
+version = "0.1.0"
 
 [[package]]
 name = "wasi"
@@ -4370,7 +4132,6 @@ dependencies = [
  "async-trait",
  "axum 0.7.9",
  "bytes",
- "dirs",
  "futures-util",
  "prometheus",
  "quinn",


@@ -40,7 +40,7 @@ codec2 = "0.3"
 
 # Crypto
 x25519-dalek = { version = "2", features = ["static_secrets"] }
-ed25519-dalek = { version = "2", features = ["rand_core", "pkcs8"] }
+ed25519-dalek = { version = "2", features = ["rand_core"] }
 chacha20poly1305 = "0.10"
 hkdf = "0.12"
 sha2 = "0.10"


@@ -57,7 +57,7 @@ class AudioPipeline(private val context: Context) {
     /** Whether to attach hardware AEC. Must be set before start(). */
     var aecEnabled: Boolean = true
 
     /** Enable debug recording of PCM + RMS histogram to cache dir. */
-    var debugRecording: Boolean = false
+    var debugRecording: Boolean = true
 
     private var captureThread: Thread? = null
     private var playoutThread: Thread? = null


@@ -28,7 +28,6 @@ class SettingsRepository(context: Context) {
         private const val KEY_PREFER_IPV6 = "prefer_ipv6"
         private const val KEY_IDENTITY_SEED = "identity_seed_hex"
         private const val KEY_AEC_ENABLED = "aec_enabled"
-        private const val KEY_DEBUG_RECORDING = "debug_recording"
         private const val KEY_RECENT_ROOMS = "recent_rooms"
         private const val TOFU_PREFIX = "tofu_"
     }
@@ -121,16 +120,6 @@
     fun saveAecEnabled(enabled: Boolean) { prefs.edit().putBoolean(KEY_AEC_ENABLED, enabled).apply() }
     fun loadAecEnabled(): Boolean = prefs.getBoolean(KEY_AEC_ENABLED, true)
 
-    // --- Debug recording ---
-    fun saveDebugRecording(enabled: Boolean) { prefs.edit().putBoolean(KEY_DEBUG_RECORDING, enabled).apply() }
-    fun loadDebugRecording(): Boolean = prefs.getBoolean(KEY_DEBUG_RECORDING, false)
-
-    // --- Codec choice ---
-    // 0 = Opus (GOOD), 1 = Opus Low (DEGRADED), 2 = Codec2 (CATASTROPHIC)
-    fun saveCodecChoice(choice: Int) { prefs.edit().putInt("codec_choice", choice).apply() }
-    fun loadCodecChoice(): Int = prefs.getInt("codec_choice", 0)
-
     // --- Identity seed ---
     /**
@@ -190,14 +179,4 @@
     fun loadServerFingerprint(address: String): String? {
         return prefs.getString("$TOFU_PREFIX$address", null)
     }
-
-    // --- Ping RTT cache ---
-    fun savePingRtt(address: String, rttMs: Int) {
-        prefs.edit().putInt("ping_rtt_$address", rttMs).apply()
-    }
-
-    fun loadPingRtt(address: String): Int {
-        return prefs.getInt("ping_rtt_$address", -1)
-    }
 }


@@ -38,12 +38,9 @@ class WzpEngine(private val callback: WzpCallback) {
      * @param alias display name sent to relay for room participant list
      * @return 0 on success, negative error code on failure
      */
-    /**
-     * @param profile 0 = Opus GOOD, 1 = Opus DEGRADED, 2 = Codec2 CATASTROPHIC
-     */
-    fun startCall(relayAddr: String, room: String, seedHex: String = "", token: String = "", alias: String = "", profile: Int = 0): Int {
+    fun startCall(relayAddr: String, room: String, seedHex: String = "", token: String = "", alias: String = ""): Int {
         check(nativeHandle != 0L) { "Engine not initialized" }
-        val result = nativeStartCall(nativeHandle, relayAddr, room, seedHex, token, alias, profile)
+        val result = nativeStartCall(nativeHandle, relayAddr, room, seedHex, token, alias)
         if (result == 0) {
             callback.onCallStateChanged(CallStateConstants.CONNECTING)
         } else {
@@ -144,7 +141,7 @@
     private external fun nativeInit(): Long
     private external fun nativeStartCall(
-        handle: Long, relay: String, room: String, seed: String, token: String, alias: String, profile: Int
+        handle: Long, relay: String, room: String, seed: String, token: String, alias: String
     ): Int
     private external fun nativeStopCall(handle: Long)
     private external fun nativeSetMute(handle: Long, muted: Boolean)
@@ -156,21 +153,20 @@
     private external fun nativeWriteAudioDirect(handle: Long, buffer: java.nio.ByteBuffer, sampleCount: Int): Int
     private external fun nativeReadAudioDirect(handle: Long, buffer: java.nio.ByteBuffer, maxSamples: Int): Int
     private external fun nativeDestroy(handle: Long)
-    private external fun nativePingRelay(handle: Long, relay: String): String?
-
-    /**
-     * Ping a relay server. Requires engine to be initialized.
-     * Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or null.
-     */
-    fun pingRelay(address: String): String? {
-        if (nativeHandle == 0L) return null
-        return nativePingRelay(nativeHandle, address)
-    }
 
     companion object {
         init {
            System.loadLibrary("wzp_android")
        }
+
+        /**
+         * Ping a relay server. Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}`
+         * or null if unreachable. Does not require an engine instance.
+         */
+        fun pingRelay(address: String): String? = nativePingRelay(address)
+
+        @JvmStatic
+        private external fun nativePingRelay(relay: String): String?
     }
 }


@@ -1,12 +0,0 @@
-package com.wzp.net
-
-// Relay pinging is now done via WzpEngine.pingRelay() (instance method).
-// This file kept for the data class only.
-object RelayPinger {
-    data class PingResult(
-        val rttMs: Int,
-        val reachable: Boolean,
-        val serverFingerprint: String = "",
-    )
-}


@@ -31,8 +31,7 @@ data class ServerEntry(val address: String, val label: String)
 
 data class PingResult(
     val rttMs: Int,
-    val serverFingerprint: String = "",
-    val reachable: Boolean = rttMs > 0,
+    val serverFingerprint: String,
 )
 
 enum class LockStatus { UNKNOWN, OFFLINE, NEW, VERIFIED, CHANGED }
@@ -106,18 +105,6 @@ class CallViewModel : ViewModel(), WzpCallback {
     private val _aecEnabled = MutableStateFlow(true)
     val aecEnabled: StateFlow<Boolean> = _aecEnabled.asStateFlow()
 
-    private val _debugRecording = MutableStateFlow(false)
-    val debugRecording: StateFlow<Boolean> = _debugRecording.asStateFlow()
-
-    // Quality profile index (matches JNI bridge profile_from_int)
-    private val _codecChoice = MutableStateFlow(0)
-    val codecChoice: StateFlow<Int> = _codecChoice.asStateFlow()
-
-    /** Key-change warning dialog state. */
-    data class KeyWarningInfo(val address: String, val oldFp: String, val newFp: String)
-    private val _keyWarning = MutableStateFlow<KeyWarningInfo?>(null)
-    val keyWarning: StateFlow<KeyWarningInfo?> = _keyWarning.asStateFlow()
-
     /** True when a call just ended and debug report can be sent. */
     private val _debugReportAvailable = MutableStateFlow(false)
     val debugReportAvailable: StateFlow<Boolean> = _debugReportAvailable.asStateFlow()
@@ -172,8 +159,6 @@
         _captureGainDb.value = s.loadCaptureGain()
         _seedHex.value = s.getOrCreateSeedHex()
         _aecEnabled.value = s.loadAecEnabled()
-        _debugRecording.value = s.loadDebugRecording()
-        _codecChoice.value = s.loadCodecChoice()
         _recentRooms.value = s.loadRecentRooms()
     }
@@ -218,43 +203,35 @@
         settings?.saveSelectedServer(_selectedServer.value)
     }
 
-    /**
-     * Ping all servers via native QUIC. Requires engine to be initialized.
-     * Creates engine if needed, pings, keeps engine alive for subsequent Connect.
-     */
+    /** Ping all servers in background, update results. */
     fun pingAllServers() {
         viewModelScope.launch {
-            // Ensure engine exists
-            if (engine == null || engine?.isInitialized != true) {
-                try {
-                    engine = WzpEngine(this@CallViewModel).also { it.init() }
-                    engineInitialized = true
-                } catch (e: Exception) {
-                    Log.w(TAG, "engine init for ping failed: $e")
-                    return@launch
-                }
-            }
-            val eng = engine ?: return@launch
             val results = mutableMapOf<String, PingResult>()
             val known = mutableMapOf<String, String>()
             _servers.value.forEach { server ->
-                val json = withContext(Dispatchers.IO) {
-                    eng.pingRelay(server.address)
-                }
-                if (json != null) {
+                val pr = withContext(Dispatchers.IO) {
                     try {
+                        val json = WzpEngine.pingRelay(server.address) ?: return@withContext null
                         val obj = JSONObject(json)
-                        val rtt = obj.getInt("rtt_ms")
-                        val fp = obj.optString("server_fingerprint", "")
-                        results[server.address] = PingResult(rttMs = rtt, serverFingerprint = fp)
-                        // TOFU
-                        if (fp.isNotEmpty()) {
-                            val saved = settings?.loadServerFingerprint(server.address)
-                            if (saved == null) settings?.saveServerFingerprint(server.address, fp)
-                            known[server.address] = saved ?: fp
+                        PingResult(
+                            rttMs = obj.getInt("rtt_ms"),
+                            serverFingerprint = obj.optString("server_fingerprint", ""),
+                        )
+                    } catch (e: Exception) {
+                        Log.w(TAG, "ping ${server.address} failed: ${e.message}")
+                        null
+                    }
+                }
+                if (pr != null) {
+                    results[server.address] = pr
+                    // TOFU: save fingerprint on first contact
+                    if (pr.serverFingerprint.isNotEmpty()) {
+                        val saved = settings?.loadServerFingerprint(server.address)
+                        if (saved == null) {
+                            settings?.saveServerFingerprint(server.address, pr.serverFingerprint)
                         }
-                    } catch (_: Exception) {}
+                        known[server.address] = saved ?: pr.serverFingerprint
+                    }
                 }
             }
             _pingResults.value = results
@@ -262,23 +239,12 @@
         }
     }
 
-    /** Load saved TOFU fingerprints. */
-    fun loadSavedFingerprints() {
-        val known = mutableMapOf<String, String>()
-        _servers.value.forEach { server ->
-            settings?.loadServerFingerprint(server.address)?.let {
-                known[server.address] = it
-            }
-        }
-        _knownFingerprints.value = known
-    }
-
     /** Get lock status for a server. */
     fun lockStatus(address: String): LockStatus {
         val pr = _pingResults.value[address] ?: return LockStatus.UNKNOWN
-        if (!pr.reachable) return LockStatus.OFFLINE
-        val known = _knownFingerprints.value[address] ?: return LockStatus.NEW
+        val known = _knownFingerprints.value[address]
         if (pr.serverFingerprint.isEmpty()) return LockStatus.NEW
+        if (known == null) return LockStatus.NEW
         return if (pr.serverFingerprint == known) LockStatus.VERIFIED else LockStatus.CHANGED
     }
@@ -314,16 +280,6 @@
         settings?.saveAecEnabled(enabled)
     }
 
-    fun setDebugRecording(enabled: Boolean) {
-        _debugRecording.value = enabled
-        settings?.saveDebugRecording(enabled)
-    }
-
-    fun setCodecChoice(choice: Int) {
-        _codecChoice.value = choice
-        settings?.saveCodecChoice(choice)
-    }
-
     /**
     * Resolve DNS hostname to IP address on the Kotlin/Android side,
     * since Rust's DNS resolution may not work on Android.
@@ -390,35 +346,7 @@
         Log.i(TAG, "teardown: done")
     }
 
-    /** Accept the new server key and proceed with the call. */
-    fun acceptNewFingerprint() {
-        val info = _keyWarning.value ?: return
-        _knownFingerprints.value = _knownFingerprints.value.toMutableMap().also {
-            it[info.address] = info.newFp
-        }
-        settings?.saveServerFingerprint(info.address, info.newFp)
-        _keyWarning.value = null
-        startCallInternal()
-    }
-
-    fun dismissKeyWarning() {
-        _keyWarning.value = null
-    }
-
     fun startCall() {
-        val serverEntry = _servers.value[_selectedServer.value]
-
-        // Check for key change before connecting
-        val ls = lockStatus(serverEntry.address)
-        if (ls == LockStatus.CHANGED) {
-            val known = _knownFingerprints.value[serverEntry.address] ?: ""
-            val current = _pingResults.value[serverEntry.address]?.serverFingerprint ?: ""
-            _keyWarning.value = KeyWarningInfo(serverEntry.address, known, current)
-            return
-        }
-
-        startCallInternal()
-    }
-
-    private fun startCallInternal() {
         val serverEntry = _servers.value[_selectedServer.value]
         val room = _roomName.value
         Log.i(TAG, "startCall: server=${serverEntry.address} room=$room")
@@ -449,7 +377,7 @@
         val seed = _seedHex.value
         val name = _alias.value
         Log.i(TAG, "startCall: resolved=$relay, alias=$name, calling engine.startCall")
-        val result = engine?.startCall(relay, room, seedHex = seed, alias = name, profile = _codecChoice.value) ?: -1
+        val result = engine?.startCall(relay, room, seedHex = seed, alias = name) ?: -1
         Log.i(TAG, "startCall: engine returned $result")
 
         // Only wire up notification callback after engine is running
         CallService.onStopFromNotification = { stopCall() }
@@ -540,7 +468,6 @@
             it.playoutGainDb = _playoutGainDb.value
             it.captureGainDb = _captureGainDb.value
             it.aecEnabled = _aecEnabled.value
-            it.debugRecording = _debugRecording.value
             it.start(e)
         }
         audioRouteManager?.register()


@@ -89,60 +89,9 @@ fun InCallScreen(
     val pingResults by viewModel.pingResults.collectAsState()
     var showManageRelays by remember { mutableStateOf(false) }
-    val keyWarning by viewModel.keyWarning.collectAsState()
 
-    // Key-change warning dialog
-    keyWarning?.let { info ->
-        AlertDialog(
-            onDismissRequest = { viewModel.dismissKeyWarning() },
-            title = {
-                Column(horizontalAlignment = Alignment.CenterHorizontally, modifier = Modifier.fillMaxWidth()) {
-                    Text("\u26A0\uFE0F", fontSize = 40.sp)
-                    Spacer(modifier = Modifier.height(8.dp))
-                    Text("Server Key Changed", fontWeight = FontWeight.Bold)
-                }
-            },
-            text = {
-                Column {
-                    Text(
-                        "The relay's identity has changed since you last connected. " +
-                        "This usually happens when the server was restarted.",
-                        style = MaterialTheme.typography.bodySmall,
-                        color = MaterialTheme.colorScheme.onSurfaceVariant
-                    )
-                    Spacer(modifier = Modifier.height(12.dp))
-                    Text("Previously known", style = MaterialTheme.typography.labelSmall, color = MaterialTheme.colorScheme.onSurfaceVariant)
-                    Text(info.oldFp, fontFamily = FontFamily.Monospace, style = MaterialTheme.typography.bodySmall)
-                    Spacer(modifier = Modifier.height(8.dp))
-                    Text("New key", style = MaterialTheme.typography.labelSmall, color = MaterialTheme.colorScheme.onSurfaceVariant)
-                    Text(info.newFp, fontFamily = FontFamily.Monospace, style = MaterialTheme.typography.bodySmall)
-                }
-            },
-            confirmButton = {
-                Button(
-                    onClick = { viewModel.acceptNewFingerprint() },
-                    colors = ButtonDefaults.buttonColors(containerColor = Color(0xFFFACC15))
-                ) {
-                    Text("Accept New Key", color = Color.Black, fontWeight = FontWeight.Bold)
-                }
-            },
-            dismissButton = {
-                TextButton(onClick = { viewModel.dismissKeyWarning() }) {
-                    Text("Cancel")
-                }
-            }
-        )
-    }
-
-    // Ping once on launch, then every 5 minutes
-    LaunchedEffect(Unit) {
-        viewModel.loadSavedFingerprints()
-        viewModel.pingAllServers()
-        while (true) {
-            kotlinx.coroutines.delay(300_000) // 5 minutes
-            viewModel.pingAllServers()
-        }
-    }
+    // Auto-ping on first display
+    LaunchedEffect(Unit) { viewModel.pingAllServers() }
 
     Surface(
         modifier = Modifier.fillMaxSize(),
@@ -485,7 +434,6 @@ fun InCallScreen(
onSelect = { idx -> viewModel.selectServer(idx) }, onSelect = { idx -> viewModel.selectServer(idx) },
onDelete = { idx -> viewModel.removeServer(idx) }, onDelete = { idx -> viewModel.removeServer(idx) },
onAdd = { addr, label -> viewModel.addServer(addr, label) }, onAdd = { addr, label -> viewModel.addServer(addr, label) },
onRefresh = { viewModel.pingAllServers() },
onDismiss = { showManageRelays = false } onDismiss = { showManageRelays = false }
) )
} }
@@ -514,7 +462,6 @@ private fun ManageRelaysDialog(
onSelect: (Int) -> Unit, onSelect: (Int) -> Unit,
onDelete: (Int) -> Unit, onDelete: (Int) -> Unit,
onAdd: (String, String) -> Unit, onAdd: (String, String) -> Unit,
onRefresh: () -> Unit,
onDismiss: () -> Unit onDismiss: () -> Unit
) { ) {
var addName by remember { mutableStateOf("") } var addName by remember { mutableStateOf("") }
@@ -530,17 +477,6 @@ private fun ManageRelaysDialog(
verticalAlignment = Alignment.CenterVertically verticalAlignment = Alignment.CenterVertically
) { ) {
Text("Manage Relays", color = Color.White, fontWeight = FontWeight.Bold) Text("Manage Relays", color = Color.White, fontWeight = FontWeight.Bold)
Row(horizontalArrangement = Arrangement.spacedBy(6.dp)) {
Surface(
onClick = onRefresh,
shape = RoundedCornerShape(8.dp),
color = DarkSurface2,
modifier = Modifier.size(32.dp)
) {
Box(contentAlignment = Alignment.Center) {
Text("\u21BB", color = TextDim, fontSize = 16.sp)
}
}
Surface( Surface(
onClick = onDismiss, onClick = onDismiss,
shape = RoundedCornerShape(8.dp), shape = RoundedCornerShape(8.dp),
@@ -552,7 +488,6 @@ private fun ManageRelaysDialog(
} }
} }
} }
}
}, },
text = { text = {
Column { Column {
@@ -604,17 +539,13 @@ private fun ManageRelaysDialog(
) )
} }
} }
Spacer(modifier = Modifier.width(4.dp)) Spacer(modifier = Modifier.width(8.dp))
Surface( Text(
onClick = { onDelete(idx) }, "\u00D7",
shape = RoundedCornerShape(4.dp), color = TextDim,
color = Color.Transparent, fontSize = 18.sp,
modifier = Modifier.size(32.dp) modifier = Modifier.clickable { onDelete(idx) }
) { )
Box(contentAlignment = Alignment.Center) {
Text("\u00D7", color = TextDim, fontSize = 18.sp)
}
}
} }
} }
} }

View File

@@ -1,6 +1,5 @@
 package com.wzp.ui.settings
-import androidx.compose.foundation.clickable
 import android.content.ClipData
 import android.content.ClipboardManager
 import android.content.Context
@@ -23,7 +22,6 @@ import androidx.compose.material3.AlertDialog
 import androidx.compose.material3.Button
 import androidx.compose.material3.ButtonDefaults
 import androidx.compose.material3.Divider
-import androidx.compose.material3.RadioButton
 import androidx.compose.material3.FilledTonalButton
 import androidx.compose.material3.FilledTonalIconButton
 import androidx.compose.material3.IconButtonDefaults
@@ -243,51 +241,6 @@ fun SettingsScreen(
 )
 }
-Spacer(modifier = Modifier.height(12.dp))
-// Quality selection — slider from best (studio 64k) to worst (codec2 1.2k) + auto
-val qualityLabels = listOf(
-"Studio 64k", "Studio 48k", "Studio 32k", "Auto",
-"Opus 24k", "Opus 6k", "Codec2 3.2k", "Codec2 1.2k"
-)
-// Map slider position to JNI profile int:
-// 0=Studio64k(6), 1=Studio48k(5), 2=Studio32k(4), 3=Auto(7),
-// 4=Opus24k(0), 5=Opus6k(1), 6=Codec2_3.2k(3), 7=Codec2_1.2k(2)
-val sliderToProfile = intArrayOf(6, 5, 4, 7, 0, 1, 3, 2)
-val profileToSlider = mapOf(6 to 0, 5 to 1, 4 to 2, 7 to 3, 0 to 4, 1 to 5, 3 to 6, 2 to 7)
-val qualityColors = listOf(
-Color(0xFF22C55E), Color(0xFF4ADE80), Color(0xFF86EFAC), Color(0xFFA3E635),
-Color(0xFFA3E635), Color(0xFFFACC15), Color(0xFFE97320), Color(0xFF991B1B)
-)
-val currentCodec by viewModel.codecChoice.collectAsState()
-val sliderPos = profileToSlider[currentCodec] ?: 3
-Text("Quality", style = MaterialTheme.typography.bodyMedium)
-Text(
-text = "Decode always accepts all codecs",
-style = MaterialTheme.typography.bodySmall,
-color = MaterialTheme.colorScheme.onSurfaceVariant
-)
-Spacer(modifier = Modifier.height(4.dp))
-Text(
-text = qualityLabels[sliderPos],
-style = MaterialTheme.typography.titleMedium.copy(fontWeight = FontWeight.Bold),
-color = qualityColors[sliderPos]
-)
-Slider(
-value = sliderPos.toFloat(),
-onValueChange = { viewModel.setCodecChoice(sliderToProfile[it.toInt()]) },
-valueRange = 0f..7f,
-steps = 6,
-modifier = Modifier.fillMaxWidth()
-)
-Row(
-modifier = Modifier.fillMaxWidth(),
-horizontalArrangement = Arrangement.SpaceBetween
-) {
-Text("Best", style = MaterialTheme.typography.labelSmall, color = Color(0xFF22C55E))
-Text("Lowest", style = MaterialTheme.typography.labelSmall, color = Color(0xFF991B1B))
-}
 Spacer(modifier = Modifier.height(24.dp))
 Divider()
 Spacer(modifier = Modifier.height(16.dp))

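The removed quality slider above encodes a bijection between slider position (0 = best, 7 = worst, with Auto at index 3) and the JNI profile integer. A minimal Rust sketch of that mapping (the original is Kotlin; names here are illustrative, not from the crate):

```rust
// Slider position 0..=7 (best → worst, Auto at index 3) to JNI profile int,
// mirroring the removed Kotlin sliderToProfile array.
const SLIDER_TO_PROFILE: [i32; 8] = [6, 5, 4, 7, 0, 1, 3, 2];

// Inverse lookup, equivalent to the removed profileToSlider map.
fn profile_to_slider(profile: i32) -> Option<usize> {
    SLIDER_TO_PROFILE.iter().position(|&p| p == profile)
}

fn main() {
    // Round-trip: every slider position maps to a unique profile and back.
    for (pos, &profile) in SLIDER_TO_PROFILE.iter().enumerate() {
        assert_eq!(profile_to_slider(profile), Some(pos));
    }
    // Auto (profile 7) sits at slider position 3.
    assert_eq!(profile_to_slider(7), Some(3));
}
```

Because the array is a permutation of 0..=7, a linear `position` search is a valid inverse; the Kotlin version precomputed it as a map.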
View File

@@ -17,7 +17,6 @@ wzp-crypto = { workspace = true }
 wzp-transport = { workspace = true }
 tokio = { workspace = true }
 tracing = { workspace = true }
-tracing-subscriber = { workspace = true, features = ["env-filter"] }
 bytes = { workspace = true }
 serde = { workspace = true }
 serde_json = "1"
@@ -28,7 +27,9 @@ libc = "0.2"
 jni = { version = "0.21", default-features = false }
 rand = { workspace = true }
 rustls = { version = "0.23", default-features = false, features = ["ring"] }
-tracing-android = "0.2"
+android_logger = "0.14"
+log = "0.4"
+tracing-log = "0.2"
 [build-dependencies]
 cc = "1"

View File

@@ -16,6 +16,8 @@ use std::time::Instant;
 use bytes::Bytes;
 use tracing::{error, info, warn};
 use wzp_codec::agc::AutoGainControl;
+use wzp_codec::opus_dec::OpusDecoder;
+use wzp_codec::opus_enc::OpusEncoder;
 use wzp_crypto::{KeyExchange, WarzoneKeyExchange};
 use wzp_fec::{RaptorQFecDecoder, RaptorQFecEncoder};
 use wzp_proto::{
@@ -27,19 +29,12 @@ use crate::audio_ring::AudioRing;
 use crate::commands::EngineCommand;
 use crate::stats::{CallState, CallStats};
-/// Max frame size at 48kHz mono (40ms = 1920 samples, for Codec2/Opus6k).
-const MAX_FRAME_SAMPLES: usize = 1920;
-/// Compute frame samples at 48kHz for a given profile.
-fn frame_samples_for(profile: &QualityProfile) -> usize {
-(profile.frame_duration_ms as usize) * 48 // 48000 / 1000
-}
+/// Opus frame size at 48kHz mono, 20ms = 960 samples.
+const FRAME_SAMPLES: usize = 960;
 /// Configuration to start a call.
 pub struct CallStartConfig {
 pub profile: QualityProfile,
-/// When true, use the relay's chosen_profile from CallAnswer instead of local profile.
-pub auto_profile: bool,
 pub relay_addr: String,
 pub room: String,
 pub auth_token: Vec<u8>,
@@ -51,7 +46,6 @@ impl Default for CallStartConfig {
 fn default() -> Self {
 Self {
 profile: QualityProfile::GOOD,
-auto_profile: false,
 relay_addr: String::new(),
 room: String::new(),
 auth_token: Vec::new(),
@@ -129,7 +123,6 @@ impl WzpEngine {
 let room = config.room.clone();
 let identity_seed = config.identity_seed;
 let profile = config.profile;
-let auto_profile = config.auto_profile;
 let alias = config.alias.clone();
 let state = self.state.clone();
@@ -138,7 +131,7 @@ impl WzpEngine {
 let state_clone = state.clone();
 runtime.block_on(async move {
-if let Err(e) = run_call(relay_addr, &room, &identity_seed, profile, auto_profile, alias.as_deref(), state_clone).await
+if let Err(e) = run_call(relay_addr, &room, &identity_seed, profile, alias.as_deref(), state_clone).await
 {
 error!("call failed: {e}");
 }
@@ -176,53 +169,6 @@ impl WzpEngine {
 info!("stop_call: done");
 }
-/// Ping a relay — same pattern as start_call (creates runtime on calling thread).
-/// Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or error.
-pub fn ping_relay(&self, address: &str) -> Result<String, anyhow::Error> {
-let addr: SocketAddr = address.parse()?;
-let _ = rustls::crypto::ring::default_provider().install_default();
-let rt = tokio::runtime::Builder::new_current_thread()
-.enable_all()
-.build()?;
-let result = rt.block_on(async {
-let bind: SocketAddr = "0.0.0.0:0".parse().unwrap();
-let endpoint = wzp_transport::create_endpoint(bind, None)?;
-let client_cfg = wzp_transport::client_config();
-let start = Instant::now();
-let conn_result = tokio::time::timeout(
-std::time::Duration::from_secs(3),
-wzp_transport::connect(&endpoint, addr, "ping", client_cfg),
-)
-.await;
-// Always close endpoint to prevent resource leaks
-endpoint.close(0u32.into(), b"done");
-let conn = conn_result.map_err(|_| anyhow::anyhow!("timeout"))??;
-let rtt_ms = start.elapsed().as_millis() as u64;
-let server_fp = conn
-.peer_identity()
-.and_then(|id| id.downcast::<Vec<rustls::pki_types::CertificateDer>>().ok())
-.and_then(|certs| certs.first().map(|c| {
-use std::hash::{Hash, Hasher};
-let mut h = std::collections::hash_map::DefaultHasher::new();
-c.as_ref().hash(&mut h);
-format!("{:016x}", h.finish())
-}))
-.unwrap_or_default();
-conn.close(0u32.into(), b"ping");
-Ok::<_, anyhow::Error>(format!(r#"{{"rtt_ms":{},"server_fingerprint":"{}"}}"#, rtt_ms, server_fp))
-});
-// Shutdown runtime cleanly with timeout
-rt.shutdown_timeout(std::time::Duration::from_millis(500));
-result
-}
 pub fn set_mute(&self, muted: bool) {
 self.state.muted.store(muted, Ordering::Relaxed);
 }
@@ -281,7 +227,6 @@ async fn run_call(
 room: &str,
 identity_seed: &[u8; 32],
 profile: QualityProfile,
-auto_profile: bool,
 alias: Option<&str>,
 state: Arc<EngineState>,
 ) -> Result<(), anyhow::Error> {
@@ -316,9 +261,6 @@ async fn run_call(
 ephemeral_pub,
 signature,
 supported_profiles: vec![
-QualityProfile::STUDIO_64K,
-QualityProfile::STUDIO_48K,
-QualityProfile::STUDIO_32K,
 QualityProfile::GOOD,
 QualityProfile::DEGRADED,
 QualityProfile::CATASTROPHIC,
@@ -333,8 +275,8 @@ async fn run_call(
 .await?
 .ok_or_else(|| anyhow::anyhow!("connection closed before CallAnswer"))?;
-let (relay_ephemeral_pub, chosen_profile) = match answer {
-SignalMessage::CallAnswer { ephemeral_pub, chosen_profile, .. } => (ephemeral_pub, chosen_profile),
+let relay_ephemeral_pub = match answer {
+SignalMessage::CallAnswer { ephemeral_pub, .. } => ephemeral_pub,
 other => {
 return Err(anyhow::anyhow!(
 "expected CallAnswer, got {:?}",
@@ -343,25 +285,19 @@ async fn run_call(
 }
 };
-// Auto mode: use the relay's chosen profile instead of the local preference
-let profile = if auto_profile {
-info!(chosen = ?chosen_profile.codec, "auto mode: using relay's chosen profile");
-chosen_profile
-} else {
-profile
-};
 let _session = kx.derive_session(&relay_ephemeral_pub)?;
-info!(codec = ?profile.codec, "handshake complete, call active");
+info!("handshake complete, call active");
 {
 let mut stats = state.stats.lock().unwrap();
 stats.state = CallState::Active;
 }
-// Initialize codec (Opus or Codec2 based on profile)
-let mut encoder = wzp_codec::create_encoder(profile);
-let mut decoder = wzp_codec::create_decoder(profile);
+// Initialize Opus codec
+let mut encoder =
+OpusEncoder::new(profile).map_err(|e| anyhow::anyhow!("opus encoder init: {e}"))?;
+let mut decoder =
+OpusDecoder::new(profile).map_err(|e| anyhow::anyhow!("opus decoder init: {e}"))?;
 // Initialize FEC encoder/decoder
 let mut fec_enc = wzp_fec::create_encoder(&profile);
@@ -371,22 +307,18 @@ async fn run_call(
 let mut capture_agc = AutoGainControl::new();
 let mut playout_agc = AutoGainControl::new();
-let frame_samples = frame_samples_for(&profile);
 info!(
-codec = ?profile.codec,
 fec_ratio = profile.fec_ratio,
 frames_per_block = profile.frames_per_block,
-frame_ms = profile.frame_duration_ms,
-frame_samples,
-"codec + FEC + AGC initialized"
+"codec + FEC + AGC initialized (48kHz mono, 20ms frames)"
 );
 let seq = AtomicU16::new(0);
 let ts = AtomicU32::new(0);
 let transport_recv = transport.clone();
-// Pre-allocate buffers (sized for current profile)
-let mut capture_buf = vec![0i16; frame_samples];
+// Pre-allocate buffers
+let mut capture_buf = vec![0i16; FRAME_SAMPLES];
 let mut encode_buf = vec![0u8; encoder.max_frame_bytes()];
 let mut frame_in_block: u8 = 0;
 let mut block_id: u8 = 0;
@@ -416,13 +348,13 @@ async fn run_call(
 }
 let avail = state.capture_ring.available();
-if avail < frame_samples {
+if avail < FRAME_SAMPLES {
 tokio::time::sleep(std::time::Duration::from_millis(5)).await;
 continue;
 }
 let read = state.capture_ring.read(&mut capture_buf);
-if read < frame_samples {
+if read < FRAME_SAMPLES {
 continue;
 }
@@ -451,7 +383,7 @@ async fn run_call(
 // Build source packet
 let s = seq.fetch_add(1, Ordering::Relaxed);
-let t = ts.fetch_add(frame_samples as u32, Ordering::Relaxed);
+let t = ts.fetch_add(FRAME_SAMPLES as u32, Ordering::Relaxed);
 let source_pkt = MediaPacket {
 header: MediaHeader {
@@ -579,8 +511,8 @@ async fn run_call(
 info!(frames_sent, frames_dropped, send_errors, "send task ended");
 };
-// Pre-allocate decode buffer (max size to handle any incoming codec)
-let mut decode_buf = vec![0i16; MAX_FRAME_SAMPLES];
+// Pre-allocate decode buffer
+let mut decode_buf = vec![0i16; FRAME_SAMPLES];
 // Recv task: MediaPackets → FEC decode → Opus decode → playout ring
 let recv_task = async {
@@ -625,27 +557,7 @@ async fn run_call(
 );
 // Source packets: decode directly
-if !is_repair && pkt.header.codec_id != CodecId::ComfortNoise {
-// Switch decoder to match incoming codec if different
-if pkt.header.codec_id != decoder.codec_id() {
-let switch_profile = match pkt.header.codec_id {
-CodecId::Opus24k => QualityProfile::GOOD,
-CodecId::Opus6k => QualityProfile::DEGRADED,
-CodecId::Opus32k => QualityProfile::STUDIO_32K,
-CodecId::Opus48k => QualityProfile::STUDIO_48K,
-CodecId::Opus64k => QualityProfile::STUDIO_64K,
-CodecId::Codec2_1200 => QualityProfile::CATASTROPHIC,
-CodecId::Codec2_3200 => QualityProfile {
-codec: CodecId::Codec2_3200,
-fec_ratio: 0.5,
-frame_duration_ms: 20,
-frames_per_block: 5,
-},
-other => QualityProfile { codec: other, ..QualityProfile::GOOD },
-};
-info!(from = ?decoder.codec_id(), to = ?pkt.header.codec_id, "recv: switching decoder");
-let _ = decoder.set_profile(switch_profile);
-}
+if !is_repair {
 match decoder.decode(&pkt.payload, &mut decode_buf) {
 Ok(samples) => {
 playout_agc.process_frame(&mut decode_buf[..samples]);

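The removed `frame_samples_for` helper and the new fixed `FRAME_SAMPLES` constant share one piece of arithmetic: at 48 kHz, each millisecond of mono audio is 48 samples. A standalone sketch (simplified to take the duration directly rather than a `QualityProfile`):

```rust
// Samples per frame at 48 kHz mono: 48_000 samples/s ÷ 1000 ms/s = 48 per ms.
fn frame_samples_for(frame_duration_ms: u8) -> usize {
    (frame_duration_ms as usize) * 48
}

fn main() {
    // 20 ms frames (Opus) → the new fixed FRAME_SAMPLES constant.
    assert_eq!(frame_samples_for(20), 960);
    // 40 ms frames (Opus 6k / Codec2 1.2k) → the old MAX_FRAME_SAMPLES.
    assert_eq!(frame_samples_for(40), 1920);
}
```

This is why the decode buffer could shrink from 1920 to 960 entries once only 20 ms Opus frames remain.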
View File

@@ -21,24 +21,11 @@ unsafe fn handle_ref(handle: jlong) -> &'static mut EngineHandle {
 unsafe { &mut *(handle as *mut EngineHandle) }
 }
-/// 7 = auto (use relay's chosen profile)
-const PROFILE_AUTO: jint = 7;
 fn profile_from_int(value: jint) -> QualityProfile {
 match value {
-0 => QualityProfile::GOOD, // Opus 24k
-1 => QualityProfile::DEGRADED, // Opus 6k
-2 => QualityProfile::CATASTROPHIC, // Codec2 1.2k
-3 => QualityProfile { // Codec2 3.2k
-codec: wzp_proto::CodecId::Codec2_3200,
-fec_ratio: 0.5,
-frame_duration_ms: 20,
-frames_per_block: 5,
-},
-4 => QualityProfile::STUDIO_32K, // Opus 32k
-5 => QualityProfile::STUDIO_48K, // Opus 48k
-6 => QualityProfile::STUDIO_64K, // Opus 64k
-_ => QualityProfile::GOOD, // auto falls back to GOOD
+1 => QualityProfile::DEGRADED,
+2 => QualityProfile::CATASTROPHIC,
+_ => QualityProfile::GOOD,
 }
 }
@@ -48,24 +35,17 @@ static INIT_LOGGING: Once = Once::new();
 /// Safe to call multiple times — only the first call takes effect.
 fn init_logging() {
 INIT_LOGGING.call_once(|| {
-// Wrap in catch_unwind — sharded_slab allocation inside
-// tracing_subscriber::registry() can crash on some Android
-// devices if scudo malloc fails during early initialization.
+// Use android_logger directly — tracing_subscriber::registry() allocates
+// a sharded_slab which causes SIGSEGV on Android 16 MTE devices.
+// android_logger is lightweight and doesn't trigger scudo crashes.
 let _ = std::panic::catch_unwind(|| {
-use tracing_subscriber::layer::SubscriberExt;
-use tracing_subscriber::util::SubscriberInitExt;
-use tracing_subscriber::EnvFilter;
-if let Ok(layer) = tracing_android::layer("wzp_android") {
-// Filter: INFO for our crates, WARN for everything else.
-// The jni crate emits VERBOSE logs for every method lookup
-// (~10 lines per JNI call, 100+ calls/sec) which floods logcat
-// and causes the system to kill the app.
-let filter = EnvFilter::new("warn,wzp_android=info,wzp_proto=info,wzp_transport=info,wzp_codec=info,wzp_fec=info,wzp_crypto=info");
-let _ = tracing_subscriber::registry()
-.with(layer)
-.with(filter)
-.try_init();
-}
+android_logger::init_once(
+android_logger::Config::default()
+.with_max_level(log::LevelFilter::Info)
+.with_tag("wzp"),
+);
+// Bridge tracing → log so our tracing::info! macros work
+let _ = tracing_log::LogTracer::init();
 });
 });
 }
@@ -98,7 +78,6 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeStartCall(
 seed_hex_j: JString,
 token_j: JString,
 alias_j: JString,
-profile_j: jint,
 ) -> jint {
 let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
 let relay_addr: String = env.get_string(&relay_addr_j).map(|s| s.into()).unwrap_or_default();
@@ -124,8 +103,7 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeStartCall(
 }
 let config = CallStartConfig {
-profile: profile_from_int(profile_j),
-auto_profile: profile_j == PROFILE_AUTO,
+profile: QualityProfile::GOOD,
 relay_addr,
 room,
 auth_token: if token.is_empty() { Vec::new() } else { token.into_bytes() },
@@ -333,22 +311,71 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeDestroy(
 }));
 }
-/// Ping a relay server — instance method, requires engine handle.
-/// Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or null on failure.
+/// Ping a relay server — returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or null on failure.
+/// Does NOT require an engine handle — creates a temporary QUIC connection.
 #[unsafe(no_mangle)]
 pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativePingRelay<'a>(
 mut env: JNIEnv<'a>,
 _class: JClass,
-handle: jlong,
 relay_j: JString,
 ) -> jstring {
 let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
-let h = unsafe { handle_ref(handle) };
 let relay: String = env.get_string(&relay_j).map(|s| s.into()).unwrap_or_default();
-match h.engine.ping_relay(&relay) {
-Ok(json) => Some(json),
-Err(_) => None,
-}
+let addr: std::net::SocketAddr = match relay.parse() {
+Ok(a) => a,
+Err(_) => return None,
+};
+let _ = rustls::crypto::ring::default_provider().install_default();
+let rt = match tokio::runtime::Builder::new_current_thread()
+.enable_all()
+.build()
+{
+Ok(rt) => rt,
+Err(_) => return None,
+};
+rt.block_on(async {
+let bind: std::net::SocketAddr = "0.0.0.0:0".parse().unwrap();
+let endpoint = match wzp_transport::create_endpoint(bind, None) {
+Ok(e) => e,
+Err(_) => return None,
+};
+let client_cfg = wzp_transport::client_config();
+let start = std::time::Instant::now();
+match tokio::time::timeout(
+std::time::Duration::from_secs(3),
+wzp_transport::connect(&endpoint, addr, "ping", client_cfg),
+)
+.await
+{
+Ok(Ok(conn)) => {
+let rtt_ms = start.elapsed().as_millis() as u64;
+let server_fp = conn
+.peer_identity()
+.and_then(|id| {
+id.downcast::<Vec<rustls::pki_types::CertificateDer>>().ok()
+})
+.and_then(|certs| {
+certs.first().map(|c| {
+use std::hash::{Hash, Hasher};
+let mut h = std::collections::hash_map::DefaultHasher::new();
+c.as_ref().hash(&mut h);
+format!("{:016x}", h.finish())
+})
+})
+.unwrap_or_default();
+conn.close(0u32.into(), b"ping");
+Some(format!(
+r#"{{"rtt_ms":{},"server_fingerprint":"{}"}}"#,
+rtt_ms, server_fp
+))
+}
+_ => None,
+}
+})
 }));
 let json = match result {

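The server fingerprint computed inside `nativePingRelay` can be isolated into a standalone sketch: a 64-bit `DefaultHasher` digest of the raw DER certificate bytes, rendered as 16 hex characters. Note that `DefaultHasher` is not a cryptographic hash and its algorithm is unspecified across Rust releases, so this is an identity hint rather than a security guarantee:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Same scheme as the diff: hash the DER bytes, format the 64-bit digest
// as a zero-padded 16-character lowercase hex string.
fn cert_fingerprint(der: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    der.hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    let fp = cert_fingerprint(b"example-der-bytes");
    assert_eq!(fp.len(), 16);
    assert!(fp.chars().all(|c| c.is_ascii_hexdigit()));
    // Deterministic within one build: same input, same fingerprint.
    assert_eq!(fp, cert_fingerprint(b"example-der-bytes"));
}
```

This is why the key-change dialog removed from `InCallScreen` compared stored and freshly pinged fingerprints as plain strings.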
View File

@@ -110,9 +110,6 @@ pub fn signal_to_call_type(signal: &SignalMessage) -> CallSignalType {
 SignalMessage::SessionForward { .. } => CallSignalType::Offer, // reuse
 SignalMessage::SessionForwardAck { .. } => CallSignalType::Offer, // reuse
 SignalMessage::RoomUpdate { .. } => CallSignalType::Offer, // reuse
-SignalMessage::FederationRoomJoin { .. }
-| SignalMessage::FederationRoomLeave { .. }
-| SignalMessage::FederationParticipantUpdate { .. } => CallSignalType::Offer, // relay-only
 }
 }

View File

@@ -38,9 +38,6 @@ pub async fn perform_handshake(
 ephemeral_pub,
 signature,
 supported_profiles: vec![
-QualityProfile::STUDIO_64K,
-QualityProfile::STUDIO_48K,
-QualityProfile::STUDIO_32K,
 QualityProfile::GOOD,
 QualityProfile::DEGRADED,
 QualityProfile::CATASTROPHIC,

View File

@@ -79,7 +79,7 @@ impl AudioDecoder for OpusDecoder {
 fn set_profile(&mut self, profile: QualityProfile) -> Result<(), CodecError> {
 match profile.codec {
-c if c.is_opus() => {
+CodecId::Opus24k | CodecId::Opus16k | CodecId::Opus6k => {
 self.codec_id = profile.codec;
 self.frame_duration_ms = profile.frame_duration_ms;
 Ok(())

View File

@@ -100,7 +100,7 @@ impl AudioEncoder for OpusEncoder {
 fn set_profile(&mut self, profile: QualityProfile) -> Result<(), CodecError> {
 match profile.codec {
-c if c.is_opus() => {
+CodecId::Opus24k | CodecId::Opus16k | CodecId::Opus6k => {
 self.codec_id = profile.codec;
 self.frame_duration_ms = profile.frame_duration_ms;
 self.apply_bitrate(profile.codec)?;

View File

@@ -18,12 +18,6 @@ pub enum CodecId {
 Codec2_1200 = 4,
 /// Comfort noise descriptor (silence suppression)
 ComfortNoise = 5,
-/// Opus at 32kbps (studio low)
-Opus32k = 6,
-/// Opus at 48kbps (studio)
-Opus48k = 7,
-/// Opus at 64kbps (studio high)
-Opus64k = 8,
 }
 impl CodecId {
@@ -33,9 +27,6 @@ impl CodecId {
 Self::Opus24k => 24_000,
 Self::Opus16k => 16_000,
 Self::Opus6k => 6_000,
-Self::Opus32k => 32_000,
-Self::Opus48k => 48_000,
-Self::Opus64k => 64_000,
 Self::Codec2_3200 => 3_200,
 Self::Codec2_1200 => 1_200,
 Self::ComfortNoise => 0,
@@ -45,7 +36,8 @@ impl CodecId {
 /// Preferred frame duration in milliseconds.
 pub const fn frame_duration_ms(self) -> u8 {
 match self {
-Self::Opus24k | Self::Opus16k | Self::Opus32k | Self::Opus48k | Self::Opus64k => 20,
+Self::Opus24k => 20,
+Self::Opus16k => 20,
 Self::Opus6k => 40,
 Self::Codec2_3200 => 20,
 Self::Codec2_1200 => 40,
@@ -56,8 +48,7 @@ impl CodecId {
 /// Sample rate expected by this codec.
 pub const fn sample_rate_hz(self) -> u32 {
 match self {
-Self::Opus24k | Self::Opus16k | Self::Opus6k
-| Self::Opus32k | Self::Opus48k | Self::Opus64k => 48_000,
+Self::Opus24k | Self::Opus16k | Self::Opus6k => 48_000,
 Self::Codec2_3200 | Self::Codec2_1200 => 8_000,
 Self::ComfortNoise => 48_000,
 }
@@ -72,9 +63,6 @@ impl CodecId {
 3 => Some(Self::Codec2_3200),
 4 => Some(Self::Codec2_1200),
 5 => Some(Self::ComfortNoise),
-6 => Some(Self::Opus32k),
-7 => Some(Self::Opus48k),
-8 => Some(Self::Opus64k),
 _ => None,
 }
 }
@@ -83,12 +71,6 @@ impl CodecId {
 pub const fn to_wire(self) -> u8 {
 self as u8
 }
-/// Returns true if this is an Opus variant.
-pub const fn is_opus(self) -> bool {
-matches!(self, Self::Opus6k | Self::Opus16k | Self::Opus24k
-| Self::Opus32k | Self::Opus48k | Self::Opus64k)
-}
 }
 /// Describes the complete quality configuration for a call session.
@@ -129,30 +111,6 @@ impl QualityProfile {
 frames_per_block: 8,
 };
-/// Studio low: Opus 32kbps, minimal FEC.
-pub const STUDIO_32K: Self = Self {
-codec: CodecId::Opus32k,
-fec_ratio: 0.1,
-frame_duration_ms: 20,
-frames_per_block: 5,
-};
-/// Studio: Opus 48kbps, minimal FEC.
-pub const STUDIO_48K: Self = Self {
-codec: CodecId::Opus48k,
-fec_ratio: 0.1,
-frame_duration_ms: 20,
-frames_per_block: 5,
-};
-/// Studio high: Opus 64kbps, minimal FEC.
-pub const STUDIO_64K: Self = Self {
-codec: CodecId::Opus64k,
-fec_ratio: 0.1,
-frame_duration_ms: 20,
-frames_per_block: 5,
-};
 /// Estimated total bandwidth in kbps including FEC overhead.
 pub fn total_bitrate_kbps(&self) -> f32 {
 let base = self.codec.bitrate_bps() as f32 / 1000.0;

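The diff cuts off `total_bitrate_kbps` after its first line, but the doc comment ("including FEC overhead") suggests the shape of the estimate. A hedged sketch, assuming FEC repair traffic adds `fec_ratio × base` on top of the codec bitrate — the crate's actual formula may differ:

```rust
// Assumed bandwidth estimate: base codec bitrate in kbps, scaled up by the
// FEC repair ratio. Only the first line (base = bps / 1000) is shown in
// the diff; the (1 + fec_ratio) factor is an illustrative assumption.
fn total_bitrate_kbps(codec_bps: u32, fec_ratio: f32) -> f32 {
    let base = codec_bps as f32 / 1000.0;
    base * (1.0 + fec_ratio)
}

fn main() {
    // e.g. Opus 24 kbps with a 0.5 repair ratio → 36 kbps estimated total.
    assert!((total_bitrate_kbps(24_000, 0.5) - 36.0).abs() < 1e-3);
    // Zero FEC leaves the base rate unchanged.
    assert!((total_bitrate_kbps(6_000, 0.0) - 6.0).abs() < 1e-3);
}
```

Under this assumption, removing the STUDIO profiles caps the estimated uplink at the GOOD profile's Opus 24 kbps plus its FEC share.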
View File

@@ -656,25 +656,6 @@ pub enum SignalMessage {
 /// List of participants currently in the room.
 participants: Vec<RoomParticipant>,
 },
-// ── Federation signals (relay-to-relay) ──
-/// Federation: a room exists on the sending relay with active local participants.
-FederationRoomJoin {
-room: String,
-participants: Vec<RoomParticipant>,
-},
-/// Federation: a room is now empty on the sending relay.
-FederationRoomLeave {
-room: String,
-},
-/// Federation: local participant list changed for a federated room.
-FederationParticipantUpdate {
-room: String,
-participants: Vec<RoomParticipant>,
-},
 }
 /// A participant entry in a RoomUpdate message.

View File

@@ -28,8 +28,6 @@ prometheus = "0.13"
axum = { version = "0.7", default-features = false, features = ["tokio", "http1", "ws"] } axum = { version = "0.7", default-features = false, features = ["tokio", "http1", "ws"] }
tower-http = { version = "0.6", features = ["fs"] } tower-http = { version = "0.6", features = ["fs"] }
futures-util = "0.3" futures-util = "0.3"
dirs = "6"
sha2 = { workspace = true }
[[bin]]
name = "wzp-relay"

View File

@@ -3,24 +3,8 @@
use serde::{Deserialize, Serialize};
use std::net::SocketAddr;
/// A federated peer relay.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct PeerConfig {
/// Address of the peer relay (e.g., "193.180.213.68:4433").
pub url: String,
/// Expected TLS certificate fingerprint (hex, with colons).
pub fingerprint: String,
/// Optional human-readable label.
#[serde(default)]
pub label: Option<String>,
}
/// Configuration for the relay daemon.
///
/// All fields have defaults, so a minimal TOML file only needs the
/// fields you want to override (e.g., just `[[peers]]`).
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct RelayConfig {
/// Address to listen on for incoming connections (client-facing).
pub listen_addr: SocketAddr,
@@ -60,9 +44,6 @@ pub struct RelayConfig {
pub ws_port: Option<u16>,
/// Directory to serve static files from (HTML/JS/WASM for web clients).
pub static_dir: Option<String>,
/// Federation peer relays.
#[serde(default)]
pub peers: Vec<PeerConfig>,
}
impl Default for RelayConfig {
@@ -81,14 +62,6 @@ impl Default for RelayConfig {
trunking_enabled: false,
ws_port: None,
static_dir: None,
peers: Vec::new(),
}
}
}
/// Load relay configuration from a TOML file.
pub fn load_config(path: &str) -> Result<RelayConfig, anyhow::Error> {
let content = std::fs::read_to_string(path)?;
let config: RelayConfig = toml::from_str(&content)?;
Ok(config)
}

View File

@@ -1,284 +0,0 @@
//! Relay federation — connects to peer relays and bridges rooms with matching names.
//!
//! Each federated peer is represented as a virtual participant in shared rooms.
//! Media from local participants is forwarded to the peer via room-tagged datagrams.
//! Media from the peer is received, demuxed by room hash, and forwarded to local participants.
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::Duration;
use bytes::Bytes;
use sha2::{Sha256, Digest};
use tokio::sync::Mutex;
use tracing::{error, info, warn};
use wzp_proto::{MediaTransport, SignalMessage};
use wzp_transport::QuinnTransport;
use crate::config::PeerConfig;
use crate::room::{self, ParticipantSender, RoomManager};
/// Compute 8-byte room hash for federation datagram tagging.
pub fn room_hash(room_name: &str) -> [u8; 8] {
let h = Sha256::digest(room_name.as_bytes());
let mut out = [0u8; 8];
out.copy_from_slice(&h[..8]);
out
}
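The module docs describe media being forwarded "via room-tagged datagrams": each federation datagram is this 8-byte room hash followed by the serialized media packet, and inbound datagrams shorter than 12 bytes (hash + minimal header) are dropped. A minimal self-contained sketch of that framing — helper names here are illustrative, not from the codebase:

```rust
// Sketch of the federation datagram framing: [8-byte room hash][media bytes].
// `room_hash` is any precomputed 8-byte tag; in the relay it is the first
// 8 bytes of SHA-256 over the room name (see `room_hash` above).

fn tag(room_hash: [u8; 8], media: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(8 + media.len());
    out.extend_from_slice(&room_hash);
    out.extend_from_slice(media);
    out
}

fn demux(data: &[u8]) -> Option<([u8; 8], &[u8])> {
    if data.len() < 8 + 4 {
        return None; // too short: need room hash + minimal media header
    }
    let mut rh = [0u8; 8];
    rh.copy_from_slice(&data[..8]);
    Some((rh, &data[8..]))
}

fn main() {
    let rh = [1, 2, 3, 4, 5, 6, 7, 8];
    let tagged = tag(rh, b"opus-frame");
    let (got_rh, media) = demux(&tagged).expect("roundtrip");
    assert_eq!(got_rh, rh);
    assert_eq!(media, &b"opus-frame"[..]);
}
```

The receiving relay uses the hash only to pick the room; the media bytes after the prefix are forwarded to local participants unchanged.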
/// Manages federation connections to peer relays.
pub struct FederationManager {
peers: Vec<PeerConfig>,
room_mgr: Arc<Mutex<RoomManager>>,
endpoint: quinn::Endpoint,
local_tls_fp: String,
}
impl FederationManager {
pub fn new(
peers: Vec<PeerConfig>,
room_mgr: Arc<Mutex<RoomManager>>,
endpoint: quinn::Endpoint,
local_tls_fp: String,
) -> Self {
Self {
peers,
room_mgr,
endpoint,
local_tls_fp,
}
}
/// Start federation — spawns one task per configured peer.
pub async fn run(self: Arc<Self>) {
if self.peers.is_empty() {
return;
}
info!(peers = self.peers.len(), "federation starting");
let mut handles = Vec::new();
for peer in &self.peers {
let this = self.clone();
let peer = peer.clone();
handles.push(tokio::spawn(async move {
run_peer_loop(this, peer).await;
}));
}
for h in handles {
let _ = h.await;
}
}
/// Handle an inbound federation connection from a peer that we recognize.
pub async fn handle_inbound(
self: &Arc<Self>,
transport: Arc<QuinnTransport>,
peer_config: PeerConfig,
) {
let addr: SocketAddr = peer_config.url.parse().unwrap_or_else(|_| "0.0.0.0:0".parse().unwrap());
info!(peer = ?peer_config.label, %addr, "inbound federation link active");
if let Err(e) = run_federation_link(self.clone(), transport, addr, &peer_config).await {
warn!(peer = ?peer_config.label, "inbound federation link ended: {e}");
}
}
/// Find a configured peer by TLS fingerprint.
pub fn find_peer_by_fingerprint(&self, fp: &str) -> Option<&PeerConfig> {
self.peers.iter().find(|p| normalize_fp(&p.fingerprint) == normalize_fp(fp))
}
}
/// Normalize a fingerprint string (remove colons, lowercase).
fn normalize_fp(fp: &str) -> String {
fp.replace(':', "").to_lowercase()
}
/// Persistent connection loop for one peer — reconnects with backoff.
async fn run_peer_loop(fm: Arc<FederationManager>, peer: PeerConfig) {
let mut backoff = Duration::from_secs(5);
loop {
info!(peer_url = %peer.url, label = ?peer.label, "federation: connecting to peer...");
match connect_to_peer(&fm, &peer).await {
Ok(transport) => {
backoff = Duration::from_secs(5); // reset on success
let addr: SocketAddr = peer.url.parse().unwrap_or_else(|_| "0.0.0.0:0".parse().unwrap());
if let Err(e) = run_federation_link(fm.clone(), transport, addr, &peer).await {
warn!(peer_url = %peer.url, "federation link ended: {e}");
}
}
Err(e) => {
warn!(peer_url = %peer.url, backoff_s = backoff.as_secs(), "federation connect failed: {e}");
}
}
tokio::time::sleep(backoff).await;
backoff = (backoff * 2).min(Duration::from_secs(300));
}
}
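The loop above starts at 5 s, doubles the delay after every failed attempt, caps it at 300 s, and resets it on a successful connect. The resulting retry schedule can be checked with a small sketch:

```rust
use std::time::Duration;

// Reproduces the backoff arithmetic from `run_peer_loop`:
// start at 5 s, double after each failure, cap at 300 s.
fn backoff_schedule(attempts: usize) -> Vec<u64> {
    let mut backoff = Duration::from_secs(5);
    let mut out = Vec::new();
    for _ in 0..attempts {
        out.push(backoff.as_secs());
        backoff = (backoff * 2).min(Duration::from_secs(300));
    }
    out
}

fn main() {
    // 5, 10, 20, 40, 80, 160, 300, 300 — capped from the seventh attempt on
    println!("{:?}", backoff_schedule(8));
}
```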
/// Connect to a peer relay.
async fn connect_to_peer(fm: &FederationManager, peer: &PeerConfig) -> Result<Arc<QuinnTransport>, anyhow::Error> {
let addr: SocketAddr = peer.url.parse()?;
let client_cfg = wzp_transport::client_config();
let conn = wzp_transport::connect(&fm.endpoint, addr, "_federation", client_cfg).await?;
// TODO: verify peer TLS fingerprint once we have cert access
let transport = Arc::new(QuinnTransport::new(conn));
info!(peer_url = %peer.url, label = ?peer.label, "federation: connected to peer");
Ok(transport)
}
/// Run the federation link: exchange room info and forward media.
async fn run_federation_link(
fm: Arc<FederationManager>,
transport: Arc<QuinnTransport>,
peer_addr: SocketAddr,
peer: &PeerConfig,
) -> Result<(), anyhow::Error> {
// Announce our active rooms to the peer
let rooms = {
let mgr = fm.room_mgr.lock().await;
mgr.active_rooms()
};
for room_name in &rooms {
let participants = {
let mgr = fm.room_mgr.lock().await;
mgr.local_participants(room_name)
};
let msg = SignalMessage::FederationRoomJoin {
room: room_name.clone(),
participants,
};
transport.send_signal(&msg).await?;
}
// Track virtual participants we create on behalf of this peer
let mut peer_room_participants: HashMap<String, room::ParticipantId> = HashMap::new();
// Map room_hash -> room_name for incoming media demux
let mut hash_to_room: HashMap<[u8; 8], String> = HashMap::new();
// Run two tasks: recv signals + recv media datagrams
let signal_transport = transport.clone();
let media_transport = transport.clone();
let fm_signal = fm.clone();
let fm_media = fm.clone();
let peer_label = peer.label.clone().unwrap_or_else(|| peer.url.clone());
let signal_task = async move {
loop {
match signal_transport.recv_signal().await {
Ok(Some(msg)) => {
match msg {
SignalMessage::FederationRoomJoin { room, participants } => {
info!(peer = %peer_label, room = %room, count = participants.len(), "federation: peer room join");
let rh = room_hash(&room);
hash_to_room.insert(rh, room.clone());
let sender = ParticipantSender::Federation {
transport: signal_transport.clone(),
room_hash: rh,
};
let (pid, update, senders) = {
let mut mgr = fm_signal.room_mgr.lock().await;
mgr.join_federated(&room, peer_addr, sender, participants)
};
peer_room_participants.insert(room, pid);
room::broadcast_signal(&senders, &update).await;
}
SignalMessage::FederationRoomLeave { room } => {
info!(peer = %peer_label, room = %room, "federation: peer room leave");
if let Some(pid) = peer_room_participants.remove(&room) {
let result = {
let mut mgr = fm_signal.room_mgr.lock().await;
mgr.leave(&room, pid)
};
if let Some((update, senders)) = result {
room::broadcast_signal(&senders, &update).await;
}
}
hash_to_room.retain(|_, v| v != &room);
}
SignalMessage::FederationParticipantUpdate { room, participants } => {
let result = {
let mut mgr = fm_signal.room_mgr.lock().await;
mgr.update_federated_participants(&room, peer_addr, participants)
};
if let Some((update, senders)) = result {
room::broadcast_signal(&senders, &update).await;
}
}
_ => {} // ignore other signals
}
}
Ok(None) => break,
Err(e) => {
error!(peer = %peer_label, "federation signal recv error: {e}");
break;
}
}
}
// Cleanup: remove all virtual participants for this peer
for (room, pid) in &peer_room_participants {
let result = {
let mut mgr = fm_signal.room_mgr.lock().await;
mgr.leave(room, *pid)
};
if let Some((update, senders)) = result {
room::broadcast_signal(&senders, &update).await;
}
}
info!(peer = %peer_label, "federation signal task ended");
};
let media_task = async move {
loop {
match media_transport.connection().read_datagram().await {
Ok(data) => {
if data.len() < 8 + 4 {
continue; // too short (need room_hash + min header)
}
let mut rh = [0u8; 8];
rh.copy_from_slice(&data[..8]);
let media_bytes = &data[8..];
// Deserialize media packet
let pkt = match wzp_proto::MediaPacket::from_bytes(Bytes::copy_from_slice(media_bytes)) {
Some(pkt) => pkt,
None => continue,
};
// Look up room by hash — we need to get the room name from the signal task's hash_to_room
// For simplicity, we forward to all local participants via the room manager
// The virtual participant approach means we don't need the room name here —
// the SFU loop handles it. But since inbound media doesn't go through run_participant,
// we need to manually fan out.
// For now, just use the room manager to find local participants
// This is a simplified approach — full implementation would maintain
// a shared hash_to_room map between signal and media tasks
let mgr = fm_media.room_mgr.lock().await;
for room_name in mgr.active_rooms() {
if room_hash(&room_name) == rh {
// Forward to all local participants in this room
let locals: Vec<_> = mgr.local_senders(&room_name);
drop(mgr); // release lock before sending
for sender in &locals {
if let ParticipantSender::Quic(t) = sender {
let _ = t.send_media(&pkt).await;
}
}
break;
}
}
}
Err(_) => break,
}
}
};
tokio::select! {
_ = signal_task => {}
_ = media_task => {}
}
Ok(())
}

View File

@@ -9,7 +9,6 @@
pub mod auth;
pub mod config;
pub mod federation;
pub mod handshake;
pub mod metrics;
pub mod pipeline;

View File

@@ -13,7 +13,7 @@ use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;
use tracing::{error, info, warn};
use tracing::{error, info};
use wzp_proto::MediaTransport;
use wzp_relay::config::RelayConfig;
@@ -24,34 +24,11 @@ use wzp_relay::room::{self, RoomManager};
use wzp_relay::session_mgr::SessionManager;
fn parse_args() -> RelayConfig {
let mut config = RelayConfig::default();
let args: Vec<String> = std::env::args().collect();
// Check for --config first to use as base
let mut config_file = None;
let mut i = 1;
while i < args.len() {
if args[i] == "--config" {
i += 1;
config_file = args.get(i).cloned();
}
i += 1;
}
let mut config = if let Some(ref path) = config_file {
wzp_relay::config::load_config(path)
.unwrap_or_else(|e| {
eprintln!("failed to load config from {path}: {e}");
std::process::exit(1);
})
} else {
RelayConfig::default()
};
// CLI flags override config file values
let mut i = 1;
while i < args.len() {
match args[i].as_str() {
"--config" => { i += 1; } // already handled
"--listen" => {
i += 1;
config.listen_addr = args.get(i).expect("--listen requires an address")
@@ -113,10 +90,9 @@ fn parse_args() -> RelayConfig {
std::process::exit(0);
}
"--help" | "-h" => {
eprintln!("Usage: wzp-relay [--config <path>] [--listen <addr>] [--remote <addr>] [--auth-url <url>] [--metrics-port <port>] [--probe <addr>]... [--probe-mesh] [--mesh-status]");
eprintln!("Usage: wzp-relay [--listen <addr>] [--remote <addr>] [--auth-url <url>] [--metrics-port <port>] [--probe <addr>]... [--probe-mesh] [--mesh-status]");
eprintln!();
eprintln!("Options:");
eprintln!(" --config <path> Load configuration from TOML file (peers, listen, etc.)");
eprintln!(" --listen <addr> Listen address (default: 0.0.0.0:4433)");
eprintln!(" --remote <addr> Remote relay for forwarding (disables room mode)");
eprintln!(" --auth-url <url> featherChat auth endpoint (e.g., https://chat.example.com/v1/auth/validate)");
@@ -208,21 +184,6 @@ async fn run_downstream(
}
}
/// Detect a non-loopback IP address from local interfaces.
/// Prefers public IPs over private (10.x, 172.16-31.x, 192.168.x).
fn detect_public_ip() -> Option<String> {
use std::net::UdpSocket;
// Connect to a public address to find our outbound IP (doesn't actually send anything)
if let Ok(socket) = UdpSocket::bind("0.0.0.0:0") {
if socket.connect("8.8.8.8:80").is_ok() {
if let Ok(addr) = socket.local_addr() {
return Some(addr.ip().to_string());
}
}
}
None
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let config = parse_args();
@@ -246,63 +207,12 @@ async fn main() -> anyhow::Result<()> {
tokio::spawn(wzp_relay::metrics::serve_metrics(port, m, p, rr));
}
// Load or generate relay identity — persisted in ~/.wzp/relay-identity
// Generate ephemeral relay identity for crypto handshake
let relay_seed = {
let relay_seed = wzp_crypto::Seed::generate();
let config_dir = dirs::home_dir()
.unwrap_or_else(|| std::path::PathBuf::from("."))
.join(".wzp");
let identity_path = config_dir.join("relay-identity");
if identity_path.exists() {
if let Ok(hex) = std::fs::read_to_string(&identity_path) {
if let Ok(s) = wzp_crypto::Seed::from_hex(hex.trim()) {
info!("loaded relay identity from {}", identity_path.display());
s
} else {
warn!("corrupt relay identity file, generating new");
let s = wzp_crypto::Seed::generate();
let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
let _ = std::fs::write(&identity_path, &hex);
s
}
} else {
let s = wzp_crypto::Seed::generate();
let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
let _ = std::fs::write(&identity_path, &hex);
s
}
} else {
let s = wzp_crypto::Seed::generate();
let _ = std::fs::create_dir_all(&config_dir);
let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
let _ = std::fs::write(&identity_path, &hex);
info!("generated relay identity at {}", identity_path.display());
s
}
};
let relay_fp = relay_seed.derive_identity().public_identity().fingerprint;
info!(addr = %config.listen_addr, fingerprint = %relay_fp, "WarzonePhone relay starting");
let (server_config, cert_der) = wzp_transport::server_config_from_seed(&relay_seed.0);
let (server_config, _cert) = wzp_transport::server_config();
let tls_fp = wzp_transport::tls_fingerprint(&cert_der);
info!(tls_fingerprint = %tls_fp, "TLS certificate (deterministic from relay identity)");
// Print federation hint with our public IP + listen port + TLS fingerprint
let listen_port = config.listen_addr.port();
let public_ip = detect_public_ip();
if let Some(ip) = &public_ip {
info!("federation: to peer with this relay, add to relay.toml:");
info!(" [[peers]]");
info!(" url = \"{ip}:{listen_port}\"");
info!(" fingerprint = \"{tls_fp}\"");
}
// Log configured peers
if !config.peers.is_empty() {
info!(count = config.peers.len(), "federation peers configured");
for p in &config.peers {
info!(url = %p.url, label = ?p.label, " peer");
}
}
let endpoint = wzp_transport::create_endpoint(config.listen_addr, Some(server_config))?;
// Forward mode
@@ -320,21 +230,6 @@ async fn main() -> anyhow::Result<()> {
// Room manager (room mode only)
let room_mgr = Arc::new(Mutex::new(RoomManager::new()));
// Federation manager
let federation_mgr = if !config.peers.is_empty() {
let fm = Arc::new(wzp_relay::federation::FederationManager::new(
config.peers.clone(),
room_mgr.clone(),
endpoint.clone(),
tls_fp.clone(),
));
let fm_run = fm.clone();
tokio::spawn(async move { fm_run.run().await });
Some(fm)
} else {
None
};
// Session manager — enforces max concurrent sessions
let session_mgr = Arc::new(Mutex::new(SessionManager::new(config.max_sessions)));
@@ -390,7 +285,6 @@ async fn main() -> anyhow::Result<()> {
let trunking_enabled = config.trunking_enabled;
let presence = presence.clone();
let route_resolver = route_resolver.clone();
let federation_mgr = federation_mgr.clone();
tokio::spawn(async move {
let addr = connection.remote_address();
@@ -405,13 +299,6 @@ async fn main() -> anyhow::Result<()> {
let transport = Arc::new(wzp_transport::QuinnTransport::new(connection));
// Ping connections: client just measures QUIC connect RTT.
// No handshake, no streams — client closes immediately after connecting.
if room_name == "ping" {
info!(%addr, "ping connection (RTT probe)");
return;
}
// Probe connections use SNI "_probe" to identify themselves.
// They skip auth + handshake and just do Ping->Pong + presence gossip.
if room_name == "_probe" {
@@ -498,38 +385,6 @@ async fn main() -> anyhow::Result<()> {
return;
}
// Federation connections use SNI "_federation"
if room_name == "_federation" {
if let Some(ref fm) = federation_mgr {
// Check if we recognize this peer by TLS fingerprint
let peer_fp = wzp_transport::tls_fingerprint(
&transport.connection()
.peer_identity()
.and_then(|id| id.downcast::<Vec<rustls::pki_types::CertificateDer>>().ok())
.and_then(|certs| certs.first().cloned())
.map(|c| c.to_vec())
.unwrap_or_default()
);
if let Some(peer_config) = fm.find_peer_by_fingerprint(&peer_fp) {
let peer_config = peer_config.clone();
let fm = fm.clone();
info!(%addr, label = ?peer_config.label, "inbound federation connection accepted");
fm.handle_inbound(transport, peer_config).await;
} else {
warn!(%addr, "unknown relay wants to federate");
info!(" to accept, add to relay.toml:");
info!(" [[peers]]");
info!(" url = \"{addr}\"");
info!(" fingerprint = \"{peer_fp}\"");
transport.close().await.ok();
}
} else {
info!(%addr, "federation connection rejected (no peers configured)");
transport.close().await.ok();
}
return;
}
// Auth check: if --auth-url is set, expect first signal message to be a token
// Auth: if --auth-url is set, expect AuthToken as first signal
let authenticated_fp: Option<String> = if let Some(ref url) = auth_url {
let authenticated_fp: Option<String> = if let Some(ref url) = auth_url { let authenticated_fp: Option<String> = if let Some(ref url) = auth_url {

View File

@@ -27,25 +27,11 @@ fn next_id() -> ParticipantId {
NEXT_PARTICIPANT_ID.fetch_add(1, Ordering::Relaxed)
}
/// Tracks where a participant originates from (for loop prevention).
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum ParticipantOrigin {
/// Connected directly to this relay.
Local,
/// Virtual participant representing a federated peer relay.
Federated { relay_addr: std::net::SocketAddr },
}
/// How to send data to a participant — either via QUIC transport or WebSocket channel.
#[derive(Clone)]
pub enum ParticipantSender {
Quic(Arc<wzp_transport::QuinnTransport>),
WebSocket(tokio::sync::mpsc::Sender<Bytes>),
/// Federated peer relay — media is prefixed with an 8-byte room hash.
Federation {
transport: Arc<wzp_transport::QuinnTransport>,
room_hash: [u8; 8],
},
}
impl ParticipantSender {
@@ -64,14 +50,6 @@ impl ParticipantSender {
};
transport.send_media(&pkt).await.map_err(|e| format!("quic send: {e}"))
}
ParticipantSender::Federation { transport, room_hash } => {
// Prefix media data with room hash for demuxing on the peer relay
let mut tagged = Vec::with_capacity(8 + data.len());
tagged.extend_from_slice(room_hash);
tagged.extend_from_slice(data);
transport.send_raw_datagram(&tagged)
.map_err(|e| format!("federation send: {e}"))
}
}
}
@@ -107,21 +85,17 @@ struct Participant {
sender: ParticipantSender,
fingerprint: Option<String>,
alias: Option<String>,
origin: ParticipantOrigin,
}
/// A room holding multiple participants.
struct Room {
participants: Vec<Participant>,
/// Remote participants from federated peers (for merged RoomUpdate).
federated_participants: HashMap<std::net::SocketAddr, Vec<wzp_proto::packet::RoomParticipant>>,
}
impl Room {
fn new() -> Self {
Self {
participants: Vec::new(),
federated_participants: HashMap::new(),
}
}
@@ -131,11 +105,10 @@ impl Room {
sender: ParticipantSender,
fingerprint: Option<String>,
alias: Option<String>,
origin: ParticipantOrigin,
) -> ParticipantId {
let id = next_id();
info!(room_size = self.participants.len() + 1, participant = id, %addr, ?origin, "joined room");
info!(room_size = self.participants.len() + 1, participant = id, %addr, "joined room");
self.participants.push(Participant { id, _addr: addr, sender, fingerprint, alias, origin });
self.participants.push(Participant { id, _addr: addr, sender, fingerprint, alias });
id
}
@@ -152,38 +125,15 @@ impl Room {
.collect()
}
/// Get senders with loop prevention for federation.
///
/// - Media from a **local** participant → send to ALL others (local + federated)
/// - Media from a **federated** participant → send to LOCAL participants only
/// (the source relay already forwarded to its own locals and other peers)
fn others_for_origin(&self, exclude_id: ParticipantId, source_origin: &ParticipantOrigin) -> Vec<ParticipantSender> {
self.participants
.iter()
.filter(|p| p.id != exclude_id)
.filter(|p| match source_origin {
ParticipantOrigin::Local => true,
ParticipantOrigin::Federated { .. } => p.origin == ParticipantOrigin::Local,
})
.map(|p| p.sender.clone())
.collect()
}
/// Build a RoomUpdate participant list (local + federated).
fn participant_list(&self) -> Vec<wzp_proto::packet::RoomParticipant> {
let mut list: Vec<_> = self.participants
.iter()
.filter(|p| p.origin == ParticipantOrigin::Local)
.map(|p| wzp_proto::packet::RoomParticipant {
fingerprint: p.fingerprint.clone().unwrap_or_default(),
alias: p.alias.clone(),
})
.collect();
// Merge federated participants from all peer relays
for remote in self.federated_participants.values() {
list.extend(remote.iter().cloned());
}
list
}
/// Build a RoomUpdate participant list.
fn participant_list(&self) -> Vec<wzp_proto::packet::RoomParticipant> {
self.participants
.iter()
.map(|p| wzp_proto::packet::RoomParticipant {
fingerprint: p.fingerprint.clone().unwrap_or_default(),
alias: p.alias.clone(),
})
.collect()
}
/// Get all senders (for broadcasting to everyone including the joiner).
@@ -264,7 +214,7 @@ impl RoomManager {
return Err("not authorized for this room".to_string());
}
let room = self.rooms.entry(room_name.to_string()).or_insert_with(Room::new);
let id = room.add(addr, sender, fingerprint.map(|s| s.to_string()), alias.map(|s| s.to_string()), ParticipantOrigin::Local);
let id = room.add(addr, sender, fingerprint.map(|s| s.to_string()), alias.map(|s| s.to_string()));
let update = wzp_proto::SignalMessage::RoomUpdate {
count: room.len() as u32,
participants: room.participant_list(),
@@ -285,83 +235,6 @@ impl RoomManager {
Ok(id)
}
/// Join a room as a federated virtual participant.
pub fn join_federated(
&mut self,
room_name: &str,
relay_addr: std::net::SocketAddr,
sender: ParticipantSender,
remote_participants: Vec<wzp_proto::packet::RoomParticipant>,
) -> (ParticipantId, wzp_proto::SignalMessage, Vec<ParticipantSender>) {
let room = self.rooms.entry(room_name.to_string()).or_insert_with(Room::new);
room.federated_participants.insert(relay_addr, remote_participants);
let id = room.add(
relay_addr, sender, None, Some("(federated)".to_string()),
ParticipantOrigin::Federated { relay_addr },
);
let update = wzp_proto::SignalMessage::RoomUpdate {
count: room.len() as u32,
participants: room.participant_list(),
};
let senders = room.all_senders();
(id, update, senders)
}
/// Update federated participant list for a room (from FederationParticipantUpdate).
pub fn update_federated_participants(
&mut self,
room_name: &str,
relay_addr: std::net::SocketAddr,
participants: Vec<wzp_proto::packet::RoomParticipant>,
) -> Option<(wzp_proto::SignalMessage, Vec<ParticipantSender>)> {
if let Some(room) = self.rooms.get_mut(room_name) {
room.federated_participants.insert(relay_addr, participants);
let update = wzp_proto::SignalMessage::RoomUpdate {
count: room.len() as u32,
participants: room.participant_list(),
};
let senders = room.all_senders();
Some((update, senders))
} else {
None
}
}
/// Get the origin of a participant by ID.
pub fn participant_origin(&self, room_name: &str, participant_id: ParticipantId) -> Option<ParticipantOrigin> {
self.rooms.get(room_name)
.and_then(|room| room.participants.iter().find(|p| p.id == participant_id))
.map(|p| p.origin.clone())
}
/// Get list of active room names (for federation room announcements).
pub fn active_rooms(&self) -> Vec<String> {
self.rooms.keys().cloned().collect()
}
/// Get local participant list for a room (excludes federated virtual participants).
pub fn local_participants(&self, room_name: &str) -> Vec<wzp_proto::packet::RoomParticipant> {
self.rooms.get(room_name)
.map(|room| room.participants.iter()
.filter(|p| p.origin == ParticipantOrigin::Local)
.map(|p| wzp_proto::packet::RoomParticipant {
fingerprint: p.fingerprint.clone().unwrap_or_default(),
alias: p.alias.clone(),
})
.collect())
.unwrap_or_default()
}
/// Get senders for local-only participants in a room (for federation inbound media).
pub fn local_senders(&self, room_name: &str) -> Vec<ParticipantSender> {
self.rooms.get(room_name)
.map(|room| room.participants.iter()
.filter(|p| p.origin == ParticipantOrigin::Local)
.map(|p| p.sender.clone())
.collect())
.unwrap_or_default()
}
/// Leave a room. Returns (room_update_msg, remaining_senders) for broadcasting, or None if room is now empty.
pub fn leave(&mut self, room_name: &str, participant_id: ParticipantId) -> Option<(wzp_proto::SignalMessage, Vec<ParticipantSender>)> {
if let Some(room) = self.rooms.get_mut(room_name) {
@@ -594,19 +467,6 @@ async fn run_participant_plain(
ParticipantSender::WebSocket(_) => {
let _ = other.send_raw(&pkt.payload).await;
}
ParticipantSender::Federation { transport, room_hash } => {
// Send room-tagged datagram to federated peer
let data = pkt.to_bytes();
let mut tagged = Vec::with_capacity(8 + data.len());
tagged.extend_from_slice(room_hash);
tagged.extend_from_slice(&data);
if let Err(e) = transport.send_raw_datagram(&tagged) {
send_errors += 1;
if send_errors <= 5 {
warn!(room = %room_name, "federation forward error: {e}");
}
}
}
}
}
let fwd_ms = fwd_start.elapsed().as_millis() as u64;
@@ -774,13 +634,6 @@ async fn run_participant_trunked(
ParticipantSender::WebSocket(_) => {
let _ = other.send_raw(&pkt.payload).await;
}
ParticipantSender::Federation { transport, room_hash } => {
let data = pkt.to_bytes();
let mut tagged = Vec::with_capacity(8 + data.len());
tagged.extend_from_slice(room_hash);
tagged.extend_from_slice(&data);
let _ = transport.send_raw_datagram(&tagged);
}
}
}
let fwd_ms = fwd_start.elapsed().as_millis() as u64;

View File

@@ -16,9 +16,6 @@ async-trait = { workspace = true }
serde_json = "1"
rustls = { version = "0.23", default-features = false, features = ["ring", "std"] }
rcgen = "0.13"
ed25519-dalek = { workspace = true }
hkdf = { workspace = true }
sha2 = { workspace = true }
[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }

View File

@@ -6,74 +6,20 @@ use std::time::Duration;
use quinn::crypto::rustls::QuicClientConfig;
use quinn::crypto::rustls::QuicServerConfig;
/// Create a server configuration with a self-signed certificate (random keypair).
///
/// The certificate changes on every call. Use `server_config_from_seed` for
/// a deterministic certificate that survives relay restarts.
/// Create a server configuration with a self-signed certificate (for testing).
///
/// Tunes QUIC transport parameters for lossy VoIP:
/// - 30s idle timeout
/// - 5s keep-alive interval
/// - DATAGRAM extension enabled
/// - Conservative flow control for bandwidth-constrained links
pub fn server_config() -> (quinn::ServerConfig, Vec<u8>) {
let cert_key = rcgen::generate_simple_self_signed(vec!["localhost".to_string()]) let cert_key = rcgen::generate_simple_self_signed(vec!["localhost".to_string()])
.expect("failed to generate self-signed cert"); .expect("failed to generate self-signed cert");
let cert_der = rustls::pki_types::CertificateDer::from(cert_key.cert); let cert_der = rustls::pki_types::CertificateDer::from(cert_key.cert);
let key_der = let key_der =
rustls::pki_types::PrivateKeyDer::try_from(cert_key.key_pair.serialize_der()).unwrap(); rustls::pki_types::PrivateKeyDer::try_from(cert_key.key_pair.serialize_der()).unwrap();
build_server_config(cert_der, key_der)
}
/// Create a server configuration with a deterministic self-signed certificate
/// derived from a 32-byte seed. Same seed = same cert = same TLS fingerprint.
pub fn server_config_from_seed(seed: &[u8; 32]) -> (quinn::ServerConfig, Vec<u8>) {
use ed25519_dalek::pkcs8::EncodePrivateKey;
use ed25519_dalek::SigningKey;
use hkdf::Hkdf;
use sha2::Sha256;
// Derive Ed25519 key bytes from seed via HKDF
let hk = Hkdf::<Sha256>::new(None, seed);
let mut ed_bytes = [0u8; 32];
hk.expand(b"wzp-tls-ed25519", &mut ed_bytes)
.expect("HKDF expand failed");
// Create Ed25519 signing key and export as PKCS8 DER
let signing_key = SigningKey::from_bytes(&ed_bytes);
let pkcs8_doc = signing_key.to_pkcs8_der()
.expect("failed to encode Ed25519 key as PKCS8");
let key_der_for_rcgen = rustls::pki_types::PrivateKeyDer::try_from(pkcs8_doc.as_bytes().to_vec())
.expect("failed to wrap PKCS8 DER");
// Create rcgen KeyPair from DER
let key_pair = rcgen::KeyPair::from_der_and_sign_algo(
&key_der_for_rcgen,
&rcgen::PKCS_ED25519,
)
.expect("failed to create KeyPair from seed-derived Ed25519 key");
// Build self-signed cert with this deterministic keypair
let params = rcgen::CertificateParams::new(vec!["localhost".to_string()])
.expect("failed to create CertificateParams");
let cert = params.self_signed(&key_pair).expect("failed to self-sign cert");
let cert_der = rustls::pki_types::CertificateDer::from(cert.der().to_vec());
let key_der = rustls::pki_types::PrivateKeyDer::try_from(key_pair.serialize_der())
.expect("failed to serialize key DER");
build_server_config(cert_der, key_der)
}
/// Compute a hex-formatted SHA-256 fingerprint of a DER-encoded certificate.
///
/// Format: `xx:xx:xx:xx:...` (32 bytes = 64 hex chars with colons).
pub fn tls_fingerprint(cert_der: &[u8]) -> String {
use sha2::{Sha256, Digest};
let hash = Sha256::digest(cert_der);
hash.iter()
.map(|b| format!("{b:02x}"))
.collect::<Vec<_>>()
.join(":")
}
fn build_server_config(
cert_der: rustls::pki_types::CertificateDer<'static>,
key_der: rustls::pki_types::PrivateKeyDer<'static>,
) -> (quinn::ServerConfig, Vec<u8>) {
let mut server_crypto = rustls::ServerConfig::builder() let mut server_crypto = rustls::ServerConfig::builder()
.with_no_client_auth() .with_no_client_auth()
.with_single_cert(vec![cert_der.clone()], key_der) .with_single_cert(vec![cert_der.clone()], key_der)


@@ -22,7 +22,7 @@ pub mod path_monitor;
 pub mod quic;
 pub mod reliable;
 
-pub use config::{client_config, server_config, server_config_from_seed, tls_fingerprint};
+pub use config::{client_config, server_config};
 pub use connection::{accept, connect, create_endpoint};
 pub use path_monitor::PathMonitor;
 pub use quic::QuinnTransport;


@@ -33,13 +33,6 @@ impl QuinnTransport {
         &self.connection
     }
 
-    /// Send raw bytes as a QUIC datagram (no MediaPacket framing).
-    pub fn send_raw_datagram(&self, data: &[u8]) -> Result<(), TransportError> {
-        self.connection
-            .send_datagram(bytes::Bytes::copy_from_slice(data))
-            .map_err(|e| TransportError::Internal(format!("datagram: {e}")))
-    }
-
     /// Close the QUIC connection immediately (synchronous, no async needed).
     /// The relay will detect the close and remove this participant from the room.
     pub fn close_now(&self) {


@@ -1,201 +0,0 @@
# PRD: Adaptive Quality Control (Auto Codec)
## Problem
When a user selects "Auto" quality, the system currently just starts at Opus 24k (GOOD) and never changes. There is no runtime adaptation — if the network degrades mid-call, audio breaks up instead of gracefully stepping down to a lower bitrate codec. Conversely, if the network is excellent, the user stays on 24k when they could have studio-quality 64k.
The relay already sends `QualityReport` messages with loss % and RTT, and a `QualityAdapter` exists in `call.rs` that classifies network conditions into GOOD/DEGRADED/CATASTROPHIC — but none of this is wired into the Android or desktop engines.
## Solution
Wire the existing `QualityAdapter` into both engines so that "Auto" mode continuously monitors network quality and switches codecs mid-call. The full quality range should be used:
```
Excellent network → Studio 64k (best quality)
Good network → Opus 24k (default)
Degraded network → Opus 6k (lower bitrate, more FEC)
Poor network → Codec2 3.2k (vocoder, heavy FEC)
Catastrophic → Codec2 1.2k (minimum viable voice)
```
## Architecture
```
┌─────────────────────┐
Relay ──────────► │ QualityReport │ loss %, RTT, jitter
│ (every ~1s) │
└────────┬────────────┘
┌─────────────────────┐
│ QualityAdapter │ classify + hysteresis
│ (3-report window) │
└────────┬────────────┘
│ recommend new profile
┌──────────────┴──────────────┐
│ │
▼ ▼
┌────────────────┐ ┌────────────────┐
│ Encoder │ │ Decoder │
│ set_profile() │ │ (auto-switch │
│ + FEC update │ │ already works)│
└────────────────┘ └────────────────┘
```
## Existing Infrastructure
### What already exists (in `crates/wzp-client/src/call.rs`)
1. **`QualityAdapter`** (lines 97-196):
- Sliding window of `QualityReport` messages
- `classify()`: loss > 15% or RTT > 200ms → CATASTROPHIC, loss > 5% or RTT > 100ms → DEGRADED, else → GOOD
- `should_switch()`: hysteresis — requires 3 consecutive reports recommending the same profile before switching
- Prevents oscillation between profiles
2. **`QualityReport`** (in `wzp-proto/src/packet.rs`):
- Sent by relay piggy-backed on media packets
- Fields: `loss_pct` (u8, 0-255 scaled), `rtt_4ms` (u8, RTT in 4ms units), `jitter_ms`, `bitrate_cap_kbps`
3. **`CallEncoder::set_profile()`** / **`CallDecoder` auto-switch**:
- Encoder can switch codec mid-stream
- Decoder already auto-detects incoming codec from packet headers
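The classify + hysteresis behavior described above can be sketched as follows (a simplified illustration, not the actual `call.rs` code; it assumes `loss_pct` maps 0-255 linearly onto 0-100% loss, and uses the PRD's thresholds and 3-report window):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Tier { Good, Degraded, Catastrophic }

// Mirrors the QualityReport wire fields: loss scaled to 0-255, RTT in 4ms units.
struct QualityReport { loss_pct: u8, rtt_4ms: u8 }

fn classify(r: &QualityReport) -> Tier {
    let loss = r.loss_pct as f32 / 255.0 * 100.0; // back to percent (assumed scaling)
    let rtt_ms = r.rtt_4ms as u32 * 4;
    if loss > 15.0 || rtt_ms > 200 { Tier::Catastrophic }
    else if loss > 5.0 || rtt_ms > 100 { Tier::Degraded }
    else { Tier::Good }
}

struct Adapter { current: Tier, pending: Option<(Tier, u8)> }

impl Adapter {
    fn new() -> Self { Adapter { current: Tier::Good, pending: None } }

    /// Feed one report; returns Some(new_tier) only after 3 consecutive
    /// reports agree on a tier different from the current one.
    fn ingest(&mut self, r: &QualityReport) -> Option<Tier> {
        let t = classify(r);
        if t == self.current { self.pending = None; return None; }
        let count = match self.pending {
            Some((p, n)) if p == t => n + 1,
            _ => 1,
        };
        if count >= 3 {
            self.pending = None;
            self.current = t;
            Some(t)
        } else {
            self.pending = Some((t, count));
            None
        }
    }
}
```

The key property is that a single bad (or good) report never triggers a switch, which is what prevents oscillation between profiles.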
### What's missing
1. **QualityReport ingestion** — neither Android engine nor desktop engine reads quality reports from the relay
2. **Profile switch loop** — no periodic check that feeds reports to `QualityAdapter` and applies recommended switches
3. **Upward adaptation** — `QualityAdapter` only classifies into 3 tiers (GOOD/DEGRADED/CATASTROPHIC). It needs an extension to recommend studio tiers when conditions are excellent (loss < 1%, RTT < 50ms)
4. **Notification to UI** — when quality changes, the UI should show the current active codec
## Requirements
### Phase 1: Basic Adaptive (3-tier)
**Both Android and Desktop:**
1. **Ingest QualityReports**: In the recv loop, extract `quality_report` from incoming `MediaPacket`s when present. Feed to `QualityAdapter`.
2. **Periodic quality check**: Every 1 second (or on each QualityReport), call `adapter.should_switch(&current_profile)`. If it returns `Some(new_profile)`:
- Switch the encoder: `encoder.set_profile(new_profile)`
- Update FEC encoder: `fec_enc = create_encoder(&new_profile)`
- Update frame size if changed (e.g., 20ms → 40ms)
- Log the switch
3. **Frame size adaptation on switch**: When switching from 20ms to 40ms frames (or vice versa):
- Android: update `frame_samples` variable, resize `capture_buf`
- Desktop: same — the send loop reads `frame_samples` dynamically
4. **UI indicator**: Show current active codec in the call screen stats line.
- Android: add to `CallStats` and display in stats text
- Desktop: add to `get_status` response and display in stats div
5. **Only in Auto mode**: Adaptive switching should only happen when the user selected "Auto". If they manually selected a profile, respect their choice.
### Phase 2: Extended Range (5-tier)
Extend `QualityAdapter::classify()` to use the full codec range:
| Condition | Profile | Codec |
|-----------|---------|-------|
| loss < 1% AND RTT < 30ms | STUDIO_64K | Opus 64k |
| loss < 1% AND RTT < 50ms | STUDIO_48K | Opus 48k |
| loss < 2% AND RTT < 80ms | STUDIO_32K | Opus 32k |
| loss < 5% AND RTT < 100ms | GOOD | Opus 24k |
| loss < 15% AND RTT < 200ms | DEGRADED | Opus 6k |
| loss >= 15% OR RTT >= 200ms | CATASTROPHIC | Codec2 1.2k |
With hysteresis:
- **Downgrade**: 3 consecutive reports (fast reaction to degradation)
- **Upgrade**: 5 consecutive reports (slow, cautious improvement)
- **Studio upgrade**: 10 consecutive reports (very conservative — avoid bouncing to 64k on brief good patches)
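The asymmetric hysteresis rule above reduces to a small decision function (a sketch; it assumes profiles are ordered by a numeric index where higher means better quality):

```rust
/// How many consecutive agreeing reports are required before switching,
/// depending on the direction of the proposed change.
fn required_reports(current: u8, proposed: u8, proposed_is_studio: bool) -> u8 {
    if proposed < current {
        3   // downgrade: react fast to degradation
    } else if proposed_is_studio {
        10  // studio upgrade: very conservative, avoid bouncing on brief good patches
    } else {
        5   // normal upgrade: slow, cautious improvement
    }
}
```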
### Phase 3: Bandwidth Probing
Rather than relying solely on loss/RTT:
1. Start at GOOD
2. After 10 seconds of stable call, probe upward by switching to STUDIO_32K
3. If no quality degradation after 5 seconds, probe to STUDIO_48K
4. If degradation detected, immediately fall back
5. This discovers the true available bandwidth rather than guessing from loss stats
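The probing schedule above is essentially a three-state machine (toy sketch; the state names mirror the PRD profiles, and the timings are the ones listed in the steps):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Probe { Good, Studio32k, Studio48k }

/// One tick of the probing state machine. `stable_secs` is how long the
/// current state has been stable; `degraded` means any quality regression.
fn next_step(state: Probe, stable_secs: u32, degraded: bool) -> Probe {
    if degraded {
        return Probe::Good; // immediate fallback on degradation
    }
    match state {
        Probe::Good if stable_secs >= 10 => Probe::Studio32k,
        Probe::Studio32k if stable_secs >= 5 => Probe::Studio48k,
        other => other,
    }
}
```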
## Implementation Plan
### Android (`crates/wzp-android/src/engine.rs`)
```rust
// In the recv loop, after decoding:
if let Some(ref qr) = pkt.quality_report {
quality_adapter.ingest(qr);
}
// Periodic check (every 50 frames ≈ 1 second):
if auto_profile && frames_decoded % 50 == 0 {
if let Some(new_profile) = quality_adapter.should_switch(&current_profile) {
info!(from = ?current_profile.codec, to = ?new_profile.codec, "auto: switching quality");
let _ = encoder_ref.lock().set_profile(new_profile);
        *fec_enc_ref.lock() = create_encoder(&new_profile);
current_profile = new_profile;
frame_samples = frame_samples_for(&new_profile);
// Resize capture buffer if needed
}
}
```
**Challenge**: The encoder is in the send task and the quality reports arrive in the recv task. Need shared state (AtomicU8 for profile index, or a channel).
**Recommended approach**: Use an `AtomicU8` that the recv task writes and the send task reads:
```rust
let pending_profile = Arc::new(AtomicU8::new(0xFF)); // 0xFF = no change
// Recv task: when adapter recommends switch
pending_profile.store(new_profile_index, Ordering::Release);
// Send task: check at frame boundary
let p = pending_profile.swap(0xFF, Ordering::Acquire);
if p != 0xFF { /* apply switch */ }
```
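As a self-contained illustration of that handoff (toy example, single-threaded for clarity; `NO_CHANGE` is this sketch's name for the 0xFF sentinel):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU8, Ordering};

const NO_CHANGE: u8 = 0xFF;

/// Recv side publishes a profile index; send side consumes it exactly once
/// at a frame boundary by swapping the sentinel back in.
fn demo_handoff() -> Option<u8> {
    let pending = Arc::new(AtomicU8::new(NO_CHANGE));

    // Recv task side: adapter recommended profile index 3.
    pending.store(3, Ordering::Release);

    // Send task side: swap back to NO_CHANGE so the switch is applied once.
    let p = pending.swap(NO_CHANGE, Ordering::Acquire);
    let applied = if p != NO_CHANGE { Some(p) } else { None };

    // A second frame boundary sees no pending change.
    assert_eq!(pending.swap(NO_CHANGE, Ordering::Acquire), NO_CHANGE);
    applied
}
```

Using `swap` rather than `load` is what makes the apply-once semantics fall out for free, even when reports arrive faster than frame boundaries.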
### Desktop (`desktop/src-tauri/src/engine.rs`)
Same pattern. The desktop engine already has separate send/recv tasks with shared atomics for mic_muted, etc. Add a `pending_profile: Arc<AtomicU8>` following the same pattern.
### Desktop CLI (`crates/wzp-client/src/call.rs`)
The `CallEncoder` already has `set_profile()`. The `CallDecoder` already auto-switches. Just need to:
1. Add `QualityAdapter` to `CallDecoder`
2. Feed quality reports in `ingest()`
3. Check `should_switch()` in `decode_next()`
4. Emit the recommendation via a callback or return value
## Testing
1. **Local test with tc/netem**: Use Linux traffic control to simulate loss/latency:
```bash
# Simulate 10% loss, 150ms RTT
tc qdisc add dev lo root netem loss 10% delay 75ms
# Run 2 clients in auto mode, verify they switch to DEGRADED
```
2. **CLI test**: Run `wzp-client --profile auto` between two instances with simulated network conditions
3. **Relay quality reports**: Verify the relay actually sends QualityReport messages. If it doesn't yet, that needs to be implemented first (check relay code).
## Open Questions
1. **Does the relay currently send QualityReports?** If not, Phase 1 is blocked until the relay implements per-client loss/RTT tracking and report generation. The relay sees all packets and can compute loss % per sender.
2. **Codec2 3.2k placement**: Should auto mode use Codec2 3.2k between DEGRADED and CATASTROPHIC? It's 20ms frames (lower latency than Opus 6k's 40ms) but speech-only quality.
3. **Cross-client adaptation**: If client A is on GOOD and client B auto-adapts to CATASTROPHIC, client A still sends Opus 24k. Client B can decode it fine (auto-switch on recv). But should A also be told to lower quality to save B's bandwidth? This requires signaling between clients.
## Milestones
| Phase | Scope | Effort | Dependency |
|-------|-------|--------|------------|
| 0 | Verify relay sends QualityReports | 0.5 day | None |
| 1a | Wire QualityAdapter in Android engine | 1 day | Phase 0 |
| 1b | Wire QualityAdapter in desktop engine | 1 day | Phase 0 |
| 1c | UI indicator (current codec) | 0.5 day | Phase 1a/1b |
| 2 | Extended 5-tier classification | 0.5 day | Phase 1 |
| 3 | Bandwidth probing | 2 days | Phase 2 |


@@ -1,170 +0,0 @@
# PRD: Relay Federation (Multi-Relay Mesh)
## Problem
Currently all participants in a call must connect to the same relay. This creates:
- **Single point of failure** — if the relay goes down, the entire call drops
- **Geographic latency** — users far from the relay get high RTT
- **Capacity limits** — one relay handles all traffic
Users should be able to connect to their nearest/preferred relay and still talk to users on other relays, as long as the relays are federated.
## Prerequisite: Fix Relay Identity Persistence
### Bug: TLS certificate regenerates on every restart
**Root cause:** `wzp-transport/src/config.rs:17` calls `rcgen::generate_simple_self_signed()` which creates a new keypair every time. The relay's Ed25519 identity seed IS persisted to `~/.wzp/relay-identity`, but the TLS certificate is not derived from it.
**Impact:** Clients see a different server fingerprint after every relay restart, triggering the "Server Key Changed" warning. This also breaks federation since relays identify each other by certificate fingerprint.
**Fix:** Derive the TLS certificate from the persisted relay seed:
1. Add `server_config_from_seed(seed: &[u8; 32])` to `wzp-transport`
2. Use the seed to create a deterministic keypair (e.g., derive an ECDSA key via HKDF from the Ed25519 seed)
3. Generate a self-signed cert with that keypair — same seed = same cert = same fingerprint
4. The relay passes its loaded seed to `server_config_from_seed()` instead of `server_config()`
**Effort:** 0.5 day
## Federation Design
### Core Concept
Two or more relays form a **federation mesh**. Each relay is an independent SFU. When relays are configured to trust each other, they bridge rooms with matching names — participants on relay A in room "podcast" hear participants on relay B in room "podcast" as if everyone were on the same relay.
### Configuration
Each relay reads a YAML config file (e.g., `~/.wzp/relay.yaml` or `--config relay.yaml`):
```yaml
# Relay identity (auto-generated if missing)
listen: 0.0.0.0:4433
# Federation peers — other relays we trust and bridge rooms with
# Both sides must configure each other for federation to work
peers:
- url: "193.180.213.68:4433"
fingerprint: "a5d6:e3c6:5ae7:185c:4eb1:af89:daed:4a43"
label: "Pangolin EU"
- url: "10.0.0.5:4433"
fingerprint: "7f2a:b391:0c44:..."
label: "Office LAN"
```
**Key rules:**
- Both relays must configure each other — **mutual trust** required
- A relay that receives a connection from an unknown peer logs: `"Relay a5d6:e3c6:... (193.180.213.68) wants to federate. To accept, add to peers config: url: 193.180.213.68:4433, fingerprint: a5d6:e3c6:..."`
- Fingerprints are verified via the TLS certificate (requires the identity fix above)
### Protocol
#### Peer Connection
1. On startup, each relay attempts QUIC connections to all configured peers
2. The connection uses SNI `"_federation"` (reserved room name prefix) to distinguish from client connections
3. After QUIC handshake, verify the peer's certificate fingerprint matches the configured fingerprint
4. If fingerprint mismatch → reject, log warning
5. If peer connects but isn't in our config → log the helpful "add to config" message, reject
#### Room Bridging
Once two relays are connected:
1. **Room discovery**: When a local participant joins room "T", the relay sends a `FederationRoomJoin { room: "T" }` signal to all connected peers
2. **Room leave**: When the last local participant leaves room "T", send `FederationRoomLeave { room: "T" }`
3. **Media forwarding**: For each room that exists on both relays:
- Relay A forwards all media packets from its local participants to relay B
- Relay B forwards all media packets from its local participants to relay A
- Each relay then fans out received federated media to its local participants (same as local SFU forwarding)
4. **Participant presence**: `RoomUpdate` signals are merged — local participants + federated participants from all peers
```
Relay A (2 local users) Relay B (1 local user)
┌─────────────────────┐ ┌─────────────────────┐
│ Room "T" │ │ Room "T" │
│ Alice (local) ────┼──media──►│ Charlie (local) │
│ Bob (local) ────┼──media──►│ │
│ │◄──media──┼── Charlie │
│ Charlie (federated)│ │ Alice (federated) │
│ │ │ Bob (federated) │
└─────────────────────┘ └─────────────────────┘
```
#### Signal Messages (new)
```rust
enum FederationSignal {
/// A room exists on this relay with active participants
RoomJoin { room: String, participants: Vec<ParticipantInfo> },
/// Room is empty on this relay
RoomLeave { room: String },
/// Participant update for a federated room
ParticipantUpdate { room: String, participants: Vec<ParticipantInfo> },
}
```
#### Media Forwarding
Federated media is forwarded as raw QUIC datagrams — the relay doesn't decode/re-encode. Each packet is prefixed with a room identifier so the receiving relay knows which room to fan it out to:
```
[room_hash: 8 bytes][original_media_packet]
```
The 8-byte room hash is computed once when the federation room bridge is established.
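The tagging and untagging of that framing is a few lines on each side (a sketch; any stable 8-byte value stands in for the negotiated room hash here):

```rust
/// Sending relay: prefix the opaque media packet with the 8-byte room hash.
fn tag(room_hash: &[u8; 8], packet: &[u8]) -> Vec<u8> {
    let mut tagged = Vec::with_capacity(8 + packet.len());
    tagged.extend_from_slice(room_hash);
    tagged.extend_from_slice(packet);
    tagged
}

/// Receiving relay: split the datagram back into (room_hash, packet).
/// Returns None for runt datagrams shorter than the prefix.
fn untag(datagram: &[u8]) -> Option<(&[u8], &[u8])> {
    if datagram.len() < 8 { return None; }
    Some(datagram.split_at(8))
}
```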
### What Relays DON'T Do
- **No transcoding** — media passes through as-is. If Alice sends Opus 64k, Charlie receives Opus 64k
- **No re-encryption** — packets are already encrypted end-to-end between participants. Relays just forward opaque bytes
- **No central coordinator** — each relay independently connects to its configured peers. No master/slave, no consensus protocol
- **No automatic peer discovery** — peers must be explicitly configured in YAML
### Failure Handling
- If a peer relay goes down, the federation link drops. Local rooms continue to work. Federated participants disappear from presence.
- Reconnection: attempt every 30 seconds with exponential backoff up to 5 minutes
- If a peer relay restarts with a new identity (bug not fixed), the fingerprint check fails and federation is rejected with a clear error log
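The reconnect schedule above (start at 30 seconds, back off exponentially, cap at 5 minutes) can be expressed as a pure function of the attempt number (sketch only):

```rust
/// Delay before reconnect attempt `attempt` (0-based), in seconds:
/// 30, 60, 120, 240, then capped at 300 (5 minutes).
fn backoff_secs(attempt: u32) -> u64 {
    let base: u64 = 30;
    let cap: u64 = 300;
    base.saturating_mul(1u64 << attempt.min(4)).min(cap)
}
```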
## Implementation Plan
### Phase 0: Fix Relay Identity (prerequisite)
- Derive TLS cert from persisted seed
- Same seed → same cert → same fingerprint across restarts
### Phase 1: YAML Config + Peer Connection
- Add `--config relay.yaml` CLI flag
- Parse peers config
- On startup, connect to all configured peers via QUIC
- Verify certificate fingerprints
- Log helpful message for unconfigured peers
- Reconnect on disconnect
### Phase 2: Room Bridging
- Track which rooms exist on each peer
- Forward media for shared rooms
- Merge participant presence across peers
- Handle room join/leave signals
### Phase 3: Resilience
- Graceful handling of peer disconnect/reconnect
- Don't duplicate packets if a participant is reachable via multiple paths
- Rate limiting on federation links (prevent amplification)
- Metrics: federated rooms, packets forwarded, peer latency
## Effort Estimates
| Phase | Scope | Effort |
|-------|-------|--------|
| 0 | Fix relay TLS identity from seed | 0.5 day |
| 1 | YAML config + peer QUIC connections | 2 days |
| 2 | Room bridging + media forwarding + presence merge | 3-4 days |
| 3 | Resilience + metrics | 2 days |
## Non-Goals (v1)
- Automatic peer discovery (mDNS, DHT, etc.)
- Cascading federation (relay A ↔ B ↔ C where A doesn't know C)
- Load balancing across relays
- Encryption between relays (QUIC provides transport encryption; e2e encryption between participants is orthogonal)
- Different rooms on different relays (all federated rooms are bridged by name)


@@ -1,75 +0,0 @@
# =============================================================================
# WZ Phone — Android build environment (Debian 12 / Bookworm)
#
# Matches the bare-metal build-android.sh environment:
# - Debian 12 (cmake 3.25, no Android cross-compilation bugs)
# - JDK 17 (Gradle 8.5 + AGP 8.2.0 compatible)
# - NDK 26.1 (last stable before scudo/MTE crash on NDK 27+)
# - Rust stable with aarch64-linux-android target + cargo-ndk
#
# Build: docker build -t wzp-android-builder -f Dockerfile.android-builder .
# =============================================================================
FROM debian:bookworm
ARG NDK_VERSION=26.1.10909125
ARG ANDROID_API=34
ENV DEBIAN_FRONTEND=noninteractive \
ANDROID_HOME=/opt/android-sdk \
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
ENV ANDROID_NDK_HOME=$ANDROID_HOME/ndk/$NDK_VERSION \
ANDROID_NDK=$ANDROID_HOME/ndk/$NDK_VERSION
# ── System packages ──────────────────────────────────────────────────────────
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cmake \
curl \
git \
libssl-dev \
pkg-config \
unzip \
wget \
zip \
openjdk-17-jdk-headless \
ca-certificates \
libasound2-dev \
&& rm -rf /var/lib/apt/lists/*
# ── Android SDK + NDK 26.1 ──────────────────────────────────────────────────
RUN mkdir -p $ANDROID_HOME/cmdline-tools \
&& cd /tmp \
&& wget -q https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip -O cmdtools.zip \
&& unzip -qo cmdtools.zip -d $ANDROID_HOME/cmdline-tools \
&& mv $ANDROID_HOME/cmdline-tools/cmdline-tools $ANDROID_HOME/cmdline-tools/latest \
&& rm cmdtools.zip
RUN yes | $ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager --licenses > /dev/null 2>&1 \
&& $ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager --install \
"platforms;android-${ANDROID_API}" \
"build-tools;${ANDROID_API}.0.0" \
"ndk;${NDK_VERSION}" \
"platform-tools" \
2>&1 | grep -v '^\[' > /dev/null
# Make SDK world-readable so builder user can access it
RUN chmod -R a+rX $ANDROID_HOME
# ── Builder user (1000:1000) ─────────────────────────────────────────────────
RUN groupadd -g 1000 builder \
&& useradd -m -u 1000 -g 1000 -s /bin/bash builder
USER builder
WORKDIR /home/builder
# ── Rust toolchain ───────────────────────────────────────────────────────────
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \
| sh -s -- -y --default-toolchain stable \
&& . $HOME/.cargo/env \
&& rustup target add aarch64-linux-android \
&& cargo install cargo-ndk
ENV PATH="/home/builder/.cargo/bin:$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/platform-tools:$JAVA_HOME/bin:$PATH"
WORKDIR /build/source

View File

@@ -1,159 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Build Android APK via Docker on SepehrHomeserverdk, upload to rustypaste,
# notify via ntfy.sh/wzp. Fire and forget.
#
# Usage:
# ./scripts/build-and-notify.sh Build + upload + notify
# ./scripts/build-and-notify.sh --rust Force Rust rebuild
# ./scripts/build-and-notify.sh --pull Git pull before building
# ./scripts/build-and-notify.sh --install Also download + adb install locally
REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/android-apk"
SSH_OPTS="-o ConnectTimeout=15 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 -o LogLevel=ERROR"
REBUILD_RUST=0
DO_PULL=0
DO_INSTALL=0
for arg in "$@"; do
case "$arg" in
--rust) REBUILD_RUST=1 ;;
--pull) DO_PULL=1 ;;
--install) DO_INSTALL=1 ;;
esac
done
log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }
ssh_cmd() { ssh -A $SSH_OPTS "$REMOTE_HOST" "$@"; }
# Upload the remote build script
log "Uploading build script to remote..."
ssh_cmd "cat > /tmp/wzp-docker-build.sh" <<'REMOTE_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
REBUILD_RUST="${1:-0}"
DO_PULL="${2:-0}"
notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }
trap 'notify "WZP Android build FAILED! Check /tmp/wzp-build.log"' ERR
# Pull if requested
if [ "$DO_PULL" = "1" ]; then
echo ">>> Pulling latest..."
cd "$BASE_DIR/data/source"
git checkout -- . 2>/dev/null || true
git pull origin feat/android-voip-client 2>&1 | tail -3
fi
# Clean Rust if requested
if [ "$REBUILD_RUST" = "1" ]; then
echo ">>> Cleaning Rust target..."
rm -rf "$BASE_DIR/data/cache/target/aarch64-linux-android/release"
fi
# Fix perms
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache" \
! -user 1000 -o ! -group 1000 2>/dev/null | \
xargs -r chown 1000:1000 2>/dev/null || true
# Clean jniLibs
rm -rf "$BASE_DIR/data/source/android/app/src/main/jniLibs/arm64-v8a"
notify "WZP build started..."
echo ">>> Building in Docker..."
docker run --rm --user 1000:1000 \
-v "$BASE_DIR/data/source:/build/source" \
-v "$BASE_DIR/data/cache/cargo-registry:/home/builder/.cargo/registry" \
-v "$BASE_DIR/data/cache/cargo-git:/home/builder/.cargo/git" \
-v "$BASE_DIR/data/cache/target:/build/source/target" \
-v "$BASE_DIR/data/cache/gradle:/home/builder/.gradle" \
wzp-android-builder bash -c '
set -euo pipefail
cd /build/source
echo ">>> Rust build..."
cargo ndk -t arm64-v8a -o android/app/src/main/jniLibs build --release -p wzp-android 2>&1 | tail -5
echo ">>> Checking .so files..."
# cargo-ndk may not copy libc++_shared.so — grab it from the NDK if missing
if [ ! -f android/app/src/main/jniLibs/arm64-v8a/libc++_shared.so ]; then
echo ">>> libc++_shared.so missing, copying from NDK..."
NDK_LIBCXX=$(find "$ANDROID_NDK_HOME" -name "libc++_shared.so" -path "*/aarch64-linux-android/*" | head -1)
if [ -n "$NDK_LIBCXX" ]; then
cp "$NDK_LIBCXX" android/app/src/main/jniLibs/arm64-v8a/
echo "Copied from: $NDK_LIBCXX"
else
echo "WARNING: libc++_shared.so not found in NDK, APK may crash at runtime"
fi
fi
ls -lh android/app/src/main/jniLibs/arm64-v8a/
[ -f android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ] || { echo "ERROR: libwzp_android.so missing!"; exit 1; }
echo ">>> APK build..."
cd android && chmod +x gradlew
./gradlew clean assembleDebug --no-daemon --warning-mode=none 2>&1 | tail -3
echo "APK_BUILT"
'
# Upload to rustypaste
echo ">>> Uploading to rustypaste..."
source "$BASE_DIR/.env"
APK=$(find "$BASE_DIR/data/source/android" -name "app-debug*.apk" -path "*/outputs/apk/*" | head -1)
if [ -n "$APK" ]; then
URL=$(curl -s -F "file=@$APK" -H "Authorization: $rusty_auth_token" "$rusty_address")
echo "UPLOAD_URL=$URL"
notify "WZP build done! APK: $URL"
echo ">>> Done! APK at: $URL"
else
notify "WZP build FAILED - no APK"
echo "ERROR: No APK found"
exit 1
fi
REMOTE_SCRIPT
ssh_cmd "chmod +x /tmp/wzp-docker-build.sh"
# Run in tmux
log "Starting build in tmux..."
ssh_cmd "tmux kill-session -t wzp-build 2>/dev/null; true"
ssh_cmd "tmux new-session -d -s wzp-build '/tmp/wzp-docker-build.sh $REBUILD_RUST $DO_PULL 2>&1 | tee /tmp/wzp-build.log'"
log "Build running! You'll get a notification on ntfy.sh/wzp with the download URL."
echo ""
echo " Monitor: ssh $REMOTE_HOST 'tail -f /tmp/wzp-build.log'"
echo " Status: ssh $REMOTE_HOST 'tail -5 /tmp/wzp-build.log'"
echo ""
# Optionally wait and install locally
if [ "$DO_INSTALL" = "1" ]; then
log "Waiting for build to finish..."
while true; do
sleep 15
if ssh_cmd "grep -q 'UPLOAD_URL\|ERROR' /tmp/wzp-build.log 2>/dev/null"; then
break
fi
done
URL=$(ssh_cmd "grep UPLOAD_URL /tmp/wzp-build.log | tail -1 | cut -d= -f2")
if [ -n "$URL" ]; then
log "Downloading APK..."
mkdir -p "$LOCAL_OUTPUT"
curl -s -o "$LOCAL_OUTPUT/wzp-debug.apk" "$URL"
log "Installing..."
adb uninstall com.wzp.phone 2>/dev/null || true
adb install "$LOCAL_OUTPUT/wzp-debug.apk"
log "Done!"
else
err "Build failed"
fi
fi


@@ -1,416 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# WZ Phone — Android APK build via Docker on remote host
#
# Replaces Hetzner Cloud VMs with a Docker container on SepehrHomeserverdk.
# Persistent storage at /mnt/storage/manBuilder/data/{source,cache,keystore}.
# Uploads APKs to rustypaste, then SCPs them back locally.
#
# Prerequisites:
# - SSH config has "SepehrHomeserverdk" host entry
# - SSH agent running with keys for both remote host and git.manko.yoga
# - Docker installed on remote host
# - /mnt/storage/manBuilder/.env with rusty_address and rusty_auth_token
#
# Usage:
# ./scripts/build-android-docker.sh Full: prepare+pull+build+upload+transfer
# ./scripts/build-android-docker.sh --prepare Build Docker image + sync keystores
# ./scripts/build-android-docker.sh --pull Clone/update source from Gitea
# ./scripts/build-android-docker.sh --build Build debug APK inside Docker
# ./scripts/build-android-docker.sh --upload Upload APKs to rustypaste
# ./scripts/build-android-docker.sh --transfer SCP APKs back to local machine
# ./scripts/build-android-docker.sh --all pull+build+upload+transfer (image ready)
#
# Add --release to also build release APK:
# ./scripts/build-android-docker.sh --build --release
# ./scripts/build-android-docker.sh --all --release
# ./scripts/build-android-docker.sh --release (full pipeline, debug+release)
#
# Environment variables (all optional):
# WZP_BRANCH Branch to build (default: feat/android-voip-client)
# =============================================================================
REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
REPO_URL="ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git"
BRANCH="${WZP_BRANCH:-feat/android-voip-client}"
DOCKER_IMAGE="wzp-android-builder"
LOCAL_OUTPUT_DIR="target/android-apk"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
LOCAL_KEYSTORE_DIR="$PROJECT_DIR/android/keystore"
SSH_OPTS="-o ConnectTimeout=10 -o LogLevel=ERROR -o ServerAliveInterval=15 -o ServerAliveCountMax=4"
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
log() { echo -e "\n\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }
ssh_cmd() {
ssh -A $SSH_OPTS "$REMOTE_HOST" "$@"
}
push_reminder() {
echo ""
echo " ┌──────────────────────────────────────────────────────────────────┐"
echo " │ IMPORTANT: Push your changes to origin (Gitea) before build! │"
echo " │ │"
echo " │ The build fetches from: │"
echo " │ ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git │"
echo " │ │"
echo " │ Run: git push origin $BRANCH"
echo " └──────────────────────────────────────────────────────────────────┘"
echo ""
read -r -p "Press Enter to continue (Ctrl-C to abort)... "
}
# ---------------------------------------------------------------------------
# --prepare: Create remote dirs, build Docker image, sync keystores
# ---------------------------------------------------------------------------
do_prepare() {
log "Preparing remote environment..."
ssh_cmd "mkdir -p $BASE_DIR/data/{source,cache/cargo-registry,cache/cargo-git,cache/target,cache/gradle,keystore}"
# Sync keystores (gitignored — won't exist after clone)
REMOTE_HAS_KEYSTORE=$(ssh_cmd "[ -f $BASE_DIR/data/keystore/wzp-debug.jks ] && echo yes || echo no")
if [ "$REMOTE_HAS_KEYSTORE" = "no" ]; then
if [ -f "$LOCAL_KEYSTORE_DIR/wzp-debug.jks" ]; then
log "Uploading keystores to remote persistent storage..."
scp $SSH_OPTS \
"$LOCAL_KEYSTORE_DIR/wzp-debug.jks" \
"$LOCAL_KEYSTORE_DIR/wzp-release.jks" \
"$REMOTE_HOST:$BASE_DIR/data/keystore/"
echo " Keystores uploaded to $BASE_DIR/data/keystore/"
else
err "No keystores found locally at $LOCAL_KEYSTORE_DIR/"
err "Build will generate a temporary debug keystore instead."
fi
else
echo " Keystores already on remote."
fi
# Upload Dockerfile from local (always use local version — no git dependency)
log "Uploading Dockerfile to remote..."
ssh_cmd "mkdir -p $BASE_DIR/data/source/scripts"
scp $SSH_OPTS \
"$PROJECT_DIR/scripts/Dockerfile.android-builder" \
"$REMOTE_HOST:$BASE_DIR/data/source/scripts/Dockerfile.android-builder"
# Build Docker image
log "Building Docker image (Debian 12 + Rust + Android SDK/NDK)..."
ssh_cmd bash <<IMAGE_EOF
set -euo pipefail
docker build -t "$DOCKER_IMAGE" - < "$BASE_DIR/data/source/scripts/Dockerfile.android-builder"
echo " Docker image '$DOCKER_IMAGE' ready."
IMAGE_EOF
}
# ---------------------------------------------------------------------------
# --pull: Clone or update source from Gitea
# ---------------------------------------------------------------------------
do_pull() {
push_reminder
log "Updating source (branch: $BRANCH)..."
ssh_cmd bash <<PULL_EOF
set -euo pipefail
mkdir -p "$BASE_DIR/data/source" \
"$BASE_DIR/data/cache/cargo-registry" \
"$BASE_DIR/data/cache/cargo-git" \
"$BASE_DIR/data/cache/target" \
"$BASE_DIR/data/cache/gradle" \
"$BASE_DIR/data/keystore"
cd "$BASE_DIR/data/source"
if [ -d .git ]; then
echo " Fetching origin..."
git fetch origin
git checkout "$BRANCH" 2>/dev/null || git checkout -b "$BRANCH" "origin/$BRANCH"
git reset --hard "origin/$BRANCH"
else
echo " Cloning repo..."
cd "$BASE_DIR/data"
rm -rf source
git clone --branch "$BRANCH" "$REPO_URL" source
cd source
fi
git submodule update --init || true
echo " HEAD: \$(git log --oneline -1)"
echo " Branch: \$(git branch --show-current)"
PULL_EOF
# Inject keystores into source tree
log "Injecting keystores into source tree..."
ssh_cmd bash <<KS_EOF
set -euo pipefail
mkdir -p "$BASE_DIR/data/source/android/keystore"
if [ -f "$BASE_DIR/data/keystore/wzp-debug.jks" ]; then
cp "$BASE_DIR/data/keystore/wzp-debug.jks" "$BASE_DIR/data/source/android/keystore/"
cp "$BASE_DIR/data/keystore/wzp-release.jks" "$BASE_DIR/data/source/android/keystore/"
echo " Keystores ready (wzp-debug.jks + wzp-release.jks)"
else
echo " WARNING: No keystores in persistent storage — build will generate temporary ones"
fi
KS_EOF
}
# ---------------------------------------------------------------------------
# --build: Build APK inside Docker container
# $1 = "1" to also build release APK (default: debug only)
# ---------------------------------------------------------------------------
do_build() {
local build_release="${1:-0}"
if [ "$build_release" = "1" ]; then
log "Building debug + release APKs inside Docker container..."
else
log "Building debug APK inside Docker container..."
fi
ssh_cmd bash <<BUILD_EOF
set -euo pipefail
# Ensure uid 1000 can write to mounted volumes
# Use find to chown only files not already owned 1000:1000; -print0/xargs -0
# keep filenames with spaces intact, and errors on stubborn files are ignored
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache" \
  \( ! -user 1000 -o ! -group 1000 \) -print0 2>/dev/null | \
  xargs -0 -r chown 1000:1000 2>/dev/null || true
docker run --rm \
--user 1000:1000 \
-e BUILD_RELEASE="$build_release" \
-v "$BASE_DIR/data/source:/build/source" \
-v "$BASE_DIR/data/cache/cargo-registry:/home/builder/.cargo/registry" \
-v "$BASE_DIR/data/cache/cargo-git:/home/builder/.cargo/git" \
-v "$BASE_DIR/data/cache/target:/build/source/target" \
-v "$BASE_DIR/data/cache/gradle:/home/builder/.gradle" \
"$DOCKER_IMAGE" \
bash -c '
set -euo pipefail
cd /build/source
echo ">>> Building Rust native library (arm64-v8a, release)..."
# Clean stale jniLibs so cargo-ndk re-copies libc++_shared.so
rm -rf android/app/src/main/jniLibs/arm64-v8a
cargo ndk -t arm64-v8a \
-o android/app/src/main/jniLibs \
build --release -p wzp-android 2>&1 | tail -10
[ -f android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ] || {
echo "ERROR: libwzp_android.so not found after build"; exit 1;
}
echo " .so size: \$(du -h android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so | cut -f1)"
# Verify keystores exist (should have been injected by --pull)
if [ -f android/keystore/wzp-debug.jks ] && [ -f android/keystore/wzp-release.jks ]; then
echo " Keystores: wzp-debug.jks + wzp-release.jks (from persistent storage)"
else
echo "WARNING: Keystores missing — generating temporary debug keystore..."
mkdir -p android/keystore
keytool -genkey -v \
-keystore android/keystore/wzp-debug.jks \
-keyalg RSA -keysize 2048 -validity 10000 \
-alias wzp-debug -storepass android -keypass android \
-dname "CN=WZP Debug" 2>&1 | tail -1
cp android/keystore/wzp-debug.jks android/keystore/wzp-release.jks
fi
cd android
chmod +x ./gradlew
echo ">>> Building debug APK..."
./gradlew assembleDebug --no-daemon --warning-mode=none 2>&1 | tail -5
if [ "\${BUILD_RELEASE}" = "1" ]; then
echo ">>> Building release APK..."
./gradlew assembleRelease --no-daemon --warning-mode=none 2>&1 | tail -5 || \
echo " (release build failed — debug APK still available)"
fi
echo ""
echo ">>> Build artifacts:"
find . -name "*.apk" -path "*/outputs/apk/*" -exec ls -lh {} \;
'
BUILD_EOF
}
# ---------------------------------------------------------------------------
# --upload: Upload APKs to rustypaste
# ---------------------------------------------------------------------------
do_upload() {
log "Uploading APKs to rustypaste..."
UPLOAD_RESULT=$(ssh_cmd bash <<'UPLOAD_EOF'
set -euo pipefail
BASE_DIR="/mnt/storage/manBuilder"
ENV_FILE="$BASE_DIR/.env"
if [ ! -f "$ENV_FILE" ]; then
echo "ERROR: $ENV_FILE not found — create it with rusty_address and rusty_auth_token" >&2
exit 1
fi
source "$ENV_FILE"
if [ -z "${rusty_address:-}" ] || [ -z "${rusty_auth_token:-}" ]; then
echo "ERROR: rusty_address or rusty_auth_token not set in $ENV_FILE" >&2
exit 1
fi
upload_apk() {
local apk="$1" label="$2"
if [ -f "$apk" ]; then
local url
url=$(curl -s -F "file=@$apk" -H "Authorization: $rusty_auth_token" "$rusty_address")
echo "$label: $url"
fi
}
DEBUG_APK=$(find "$BASE_DIR/data/source/android" -name "app-debug*.apk" -path "*/outputs/apk/*" 2>/dev/null | head -1)
RELEASE_APK=$(find "$BASE_DIR/data/source/android" -name "app-release*.apk" -path "*/outputs/apk/*" 2>/dev/null | head -1)
upload_apk "${DEBUG_APK:-}" "debug"
upload_apk "${RELEASE_APK:-}" "release"
UPLOAD_EOF
)
echo "$UPLOAD_RESULT"
}
# ---------------------------------------------------------------------------
# --transfer: SCP APKs back to local machine
# ---------------------------------------------------------------------------
do_transfer() {
log "Downloading APKs to local machine..."
mkdir -p "$LOCAL_OUTPUT_DIR"
# Debug APK
DEBUG_REMOTE=$(ssh_cmd "find $BASE_DIR/data/source/android -name 'app-debug*.apk' -path '*/outputs/apk/*' 2>/dev/null | head -1" || true)
if [ -n "$DEBUG_REMOTE" ]; then
scp $SSH_OPTS "$REMOTE_HOST:$DEBUG_REMOTE" "$LOCAL_OUTPUT_DIR/wzp-debug.apk"
echo " debug: $LOCAL_OUTPUT_DIR/wzp-debug.apk ($(du -h "$LOCAL_OUTPUT_DIR/wzp-debug.apk" | cut -f1))"
fi
# Release APK
RELEASE_REMOTE=$(ssh_cmd "find $BASE_DIR/data/source/android -name 'app-release*.apk' -path '*/outputs/apk/*' 2>/dev/null | head -1" || true)
if [ -n "$RELEASE_REMOTE" ]; then
scp $SSH_OPTS "$REMOTE_HOST:$RELEASE_REMOTE" "$LOCAL_OUTPUT_DIR/wzp-release.apk"
echo " release: $LOCAL_OUTPUT_DIR/wzp-release.apk ($(du -h "$LOCAL_OUTPUT_DIR/wzp-release.apk" | cut -f1))"
fi
# Also grab the .so
scp $SSH_OPTS "$REMOTE_HOST:$BASE_DIR/data/source/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so" \
"$LOCAL_OUTPUT_DIR/libwzp_android.so" 2>/dev/null \
&& echo " .so: $LOCAL_OUTPUT_DIR/libwzp_android.so" || true
}
# ---------------------------------------------------------------------------
# Summary banner
# ---------------------------------------------------------------------------
show_summary() {
log "All done!"
echo ""
echo " ┌──────────────────────────────────────────────────────────────┐"
[ -f "$LOCAL_OUTPUT_DIR/wzp-debug.apk" ] && \
echo " │ Debug APK: $LOCAL_OUTPUT_DIR/wzp-debug.apk"
[ -f "$LOCAL_OUTPUT_DIR/wzp-release.apk" ] && \
echo " │ Release APK: $LOCAL_OUTPUT_DIR/wzp-release.apk"
echo " │"
if [ -n "${UPLOAD_RESULT:-}" ]; then
echo " │ Rustypaste:"
echo "$UPLOAD_RESULT" | while read -r line; do
echo "$line"
done
echo " │"
fi
echo " │ Install: adb install -r $LOCAL_OUTPUT_DIR/wzp-debug.apk"
echo " └──────────────────────────────────────────────────────────────┘"
}
# ---------------------------------------------------------------------------
# Parse arguments
# ---------------------------------------------------------------------------
ACTION=""
BUILD_RELEASE=0
for arg in "$@"; do
case "$arg" in
--release) BUILD_RELEASE=1 ;;
--prepare|--pull|--build|--upload|--transfer|--all)
if [ -n "$ACTION" ]; then
err "Multiple actions specified: $ACTION and $arg"
exit 1
fi
ACTION="$arg"
;;
*)
echo "Usage: $0 [--prepare|--pull|--build|--upload|--transfer|--all] [--release]"
echo ""
echo "Actions:"
echo " (no action) Full pipeline: pull → prepare → build → upload → transfer"
echo " --prepare Build Docker image + sync keystores to remote"
echo " --pull Clone/update source from Gitea + inject keystores"
echo " --build Build debug APK inside Docker container"
echo " --upload Upload APKs to rustypaste"
echo " --transfer SCP APKs + .so back to local machine"
echo " --all pull → build → upload → transfer (Docker image ready)"
echo ""
echo "Flags:"
echo " --release Also build release APK (default: debug only)"
echo ""
echo "Examples:"
echo " $0 # full pipeline, debug only"
echo " $0 --release # full pipeline, debug + release"
echo " $0 --build # debug APK only"
echo " $0 --build --release # debug + release APKs"
echo " $0 --all # iterate: pull+build+upload+transfer (debug)"
echo " $0 --all --release # iterate with release too"
echo ""
echo "Environment:"
echo " WZP_BRANCH=$BRANCH"
exit 1
;;
esac
done
# ---------------------------------------------------------------------------
# Dispatch
# ---------------------------------------------------------------------------
case "${ACTION:-}" in
--prepare)
do_prepare
;;
--pull)
do_pull
;;
--build)
do_build "$BUILD_RELEASE"
;;
--upload)
do_upload
;;
--transfer)
do_transfer
;;
--all)
do_pull
do_build "$BUILD_RELEASE"
do_upload
do_transfer
show_summary
;;
"")
do_pull
do_prepare
do_build "$BUILD_RELEASE"
do_upload
do_transfer
show_summary
;;
esac
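
The build and upload steps above drive the remote host through ssh heredocs, and their correctness hinges on the delimiter quoting: `<<BUILD_EOF` (unquoted) expands `$BASE_DIR` and friends locally before the text is sent, while `<<'UPLOAD_EOF'` (quoted) ships the script verbatim for the remote shell to expand. A minimal local sketch, with plain `bash` standing in for `ssh host bash`:

```shell
# Unquoted vs quoted heredoc delimiters — `bash` stands in for `ssh host bash`.
LOCAL_VAL="expanded-locally"

# Unquoted delimiter: $LOCAL_VAL is substituted by THIS shell before the
# child ever sees the text.
unquoted=$(bash <<EOF
echo "$LOCAL_VAL"
EOF
)

# Quoted delimiter: the child shell receives the literal text; LOCAL_VAL was
# never exported, so the parameter-expansion fallback fires in the child.
quoted=$(bash <<'EOF'
echo "${LOCAL_VAL:-resolved-remotely}"
EOF
)

echo "unquoted: $unquoted"   # unquoted: expanded-locally
echo "quoted:   $quoted"     # quoted:   resolved-remotely
```

This is why `do_upload` can redefine `BASE_DIR` inside its quoted heredoc, while `do_build` relies on local expansion of the same variable.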


@@ -1,161 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Build WarzonePhone Linux x86_64 binaries via Docker on SepehrHomeserverdk.
# Reuses same Docker image as Android build (has Rust + cmake + build tools).
# Fire and forget — notifies via ntfy.sh/wzp with rustypaste URL.
#
# Usage:
# ./scripts/build-linux-docker.sh Build + upload + notify
# ./scripts/build-linux-docker.sh --pull Git pull before building
# ./scripts/build-linux-docker.sh --clean Clean Rust target cache
# ./scripts/build-linux-docker.sh --install Download binaries locally after build
REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/linux-x86_64"
SSH_OPTS="-o ConnectTimeout=15 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 -o LogLevel=ERROR"
DO_PULL=0
DO_CLEAN=0
DO_INSTALL=0
for arg in "$@"; do
case "$arg" in
--pull) DO_PULL=1 ;;
--clean) DO_CLEAN=1 ;;
--install) DO_INSTALL=1 ;;
esac
done
log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }
ssh_cmd() { ssh $SSH_OPTS "$REMOTE_HOST" "$@"; }
# Upload build script to remote
log "Uploading build script..."
ssh_cmd "cat > /tmp/wzp-linux-build.sh" <<'REMOTE_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
DO_PULL="${1:-0}"
DO_CLEAN="${2:-0}"
notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }
trap 'notify "WZP Linux build FAILED! Check /tmp/wzp-linux-build.log"' ERR
if [ "$DO_PULL" = "1" ]; then
echo ">>> Pulling latest..."
cd "$BASE_DIR/data/source"
git checkout -- . 2>/dev/null || true
git pull origin feat/android-voip-client 2>&1 | tail -3
fi
if [ "$DO_CLEAN" = "1" ]; then
echo ">>> Cleaning Linux target cache..."
rm -rf "$BASE_DIR/data/cache-linux/target"
fi
# Ensure cache dirs exist (separate from Android cache)
mkdir -p "$BASE_DIR/data/cache-linux/target" \
"$BASE_DIR/data/cache-linux/cargo-registry" \
"$BASE_DIR/data/cache-linux/cargo-git"
# Fix perms
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache-linux" \
  \( ! -user 1000 -o ! -group 1000 \) -print0 2>/dev/null | \
  xargs -0 -r chown 1000:1000 2>/dev/null || true
notify "WZP Linux x86_64 build started..."
echo ">>> Building in Docker..."
docker run --rm --user 1000:1000 \
-v "$BASE_DIR/data/source:/build/source" \
-v "$BASE_DIR/data/cache-linux/cargo-registry:/home/builder/.cargo/registry" \
-v "$BASE_DIR/data/cache-linux/cargo-git:/home/builder/.cargo/git" \
-v "$BASE_DIR/data/cache-linux/target:/build/source/target" \
wzp-android-builder bash -c '
set -euo pipefail
cd /build/source
echo ">>> Building relay + client + web + bench..."
cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-web --bin wzp-bench 2>&1 | tail -5
echo ">>> Building audio client..."
cargo build --release --bin wzp-client --features audio 2>&1 | tail -3
cp target/release/wzp-client target/release/wzp-client-audio
cargo build --release --bin wzp-client 2>&1 | tail -3
echo ">>> Binaries:"
ls -lh target/release/wzp-relay target/release/wzp-client target/release/wzp-client-audio target/release/wzp-web target/release/wzp-bench
echo ">>> Packaging..."
tar czf /tmp/wzp-linux-x86_64.tar.gz \
-C target/release wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench
echo "BINARIES_BUILT"
'
# Upload to rustypaste
echo ">>> Uploading to rustypaste..."
source "$BASE_DIR/.env"
# The tarball from the build step was written to /tmp INSIDE the container and
# is gone with it; re-run the image to repackage from the mounted target dir
# and stream the archive to the host via stdout.
docker run --rm \
-v "$BASE_DIR/data/cache-linux/target:/build/target" \
wzp-android-builder bash -c \
"cp /build/target/release/wzp-relay /build/target/release/wzp-client /build/target/release/wzp-client-audio /build/target/release/wzp-web /build/target/release/wzp-bench /tmp/ && tar czf /tmp/wzp-linux-x86_64.tar.gz -C /tmp wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench && cat /tmp/wzp-linux-x86_64.tar.gz" \
> /tmp/wzp-linux-x86_64.tar.gz
URL=$(curl -s -F "file=@/tmp/wzp-linux-x86_64.tar.gz" -H "Authorization: $rusty_auth_token" "$rusty_address")
if [ -n "$URL" ]; then
echo "UPLOAD_URL=$URL"
notify "WZP Linux x86_64 binaries ready! $URL"
echo ">>> Done! Binaries at: $URL"
else
notify "WZP Linux build FAILED - upload error"
echo "ERROR: upload failed"
exit 1
fi
REMOTE_SCRIPT
ssh_cmd "chmod +x /tmp/wzp-linux-build.sh"
# Run in tmux
log "Starting Linux build in tmux..."
ssh_cmd "tmux kill-session -t wzp-linux 2>/dev/null; true"
ssh_cmd "tmux new-session -d -s wzp-linux '/tmp/wzp-linux-build.sh $DO_PULL $DO_CLEAN 2>&1 | tee /tmp/wzp-linux-build.log'"
log "Build running! Notification on ntfy.sh/wzp when done."
echo ""
echo " Monitor: ssh $REMOTE_HOST 'tail -f /tmp/wzp-linux-build.log'"
echo " Status: ssh $REMOTE_HOST 'tail -5 /tmp/wzp-linux-build.log'"
echo ""
# Optionally wait and download
if [ "$DO_INSTALL" = "1" ]; then
log "Waiting for build..."
while true; do
sleep 15
if ssh_cmd "grep -q 'UPLOAD_URL\|ERROR' /tmp/wzp-linux-build.log 2>/dev/null"; then
break
fi
done
URL=$(ssh_cmd "grep UPLOAD_URL /tmp/wzp-linux-build.log | tail -1 | cut -d= -f2-")
if [ -n "$URL" ]; then
log "Downloading binaries..."
mkdir -p "$LOCAL_OUTPUT"
curl -s -o "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz" "$URL"
tar xzf "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz" -C "$LOCAL_OUTPUT/"
rm "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz"
ls -lh "$LOCAL_OUTPUT"/wzp-*
log "Done! Binaries in $LOCAL_OUTPUT/"
else
err "Build failed"
fi
fi


@@ -1,122 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Build WarzonePhone Linux x86_64 binaries via Hetzner Cloud VPS.
# Fire and forget — notifies via ntfy.sh/wzp with rustypaste URL.
#
# Usage:
# ./scripts/build-linux-notify.sh Full: create VM → build → upload → notify → destroy
# ./scripts/build-linux-notify.sh --keep Keep VM after build
# ./scripts/build-linux-notify.sh --pull Git pull (for existing VM)
SSH_KEY_NAME="wz"
SSH_KEY_PATH="/Users/manwe/CascadeProjects/wzp"
SERVER_TYPE="cx33"
IMAGE="debian-12"
SERVER_NAME="wzp-linux-builder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/linux-x86_64"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=15 -o ServerAliveInterval=15 -o LogLevel=ERROR"
KEEP_VM=0
DO_PULL=0
for arg in "$@"; do
case "$arg" in
--keep) KEEP_VM=1 ;;
--pull) DO_PULL=1 ;;
esac
done
log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }
get_vm_ip() {
hcloud server list -o columns=name,ipv4 -o noheader 2>/dev/null | grep "$SERVER_NAME" | awk '{print $2}' | tr -d ' '
}
ssh_cmd() {
local ip=$(get_vm_ip)
[ -n "$ip" ] || { err "No VM found"; exit 1; }
ssh $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip" "$@"
}
notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }
# --- Create VM if needed ---
existing=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
if [ -z "$existing" ]; then
log "Creating Hetzner VM ($SERVER_TYPE, $IMAGE)..."
hcloud server create --name "$SERVER_NAME" --type "$SERVER_TYPE" --image "$IMAGE" --ssh-key "$SSH_KEY_NAME" --location fsn1 --quiet
log "Waiting for SSH..."
ip=$(get_vm_ip)
for i in $(seq 1 30); do
ssh $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip" "echo ok" &>/dev/null && break
sleep 2
done
log "Installing deps..."
ssh_cmd "apt-get update -qq && apt-get install -y -qq build-essential cmake pkg-config libasound2-dev libssl-dev curl git > /dev/null 2>&1"
ssh_cmd "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable > /dev/null 2>&1"
fi
# --- Upload source ---
log "Uploading source..."
ip=$(get_vm_ip)
rsync -az --delete \
--exclude='target' --exclude='.git' --exclude='.claude' \
--exclude='node_modules' --exclude='dist' --exclude='android/app/build' \
-e "ssh $SSH_OPTS -i $SSH_KEY_PATH" \
"$PROJECT_DIR/" "root@$ip:/root/wzp-build/"
# --- Build ---
log "Building all binaries..."
notify "WZP Linux build started..."
ssh_cmd "source ~/.cargo/env && cd /root/wzp-build && \
cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-web --bin wzp-bench 2>&1 | tail -5 && \
echo '--- audio client ---' && \
cargo build --release --bin wzp-client --features audio 2>&1 | tail -3 && \
cp target/release/wzp-client target/release/wzp-client-audio && \
cargo build --release --bin wzp-client 2>&1 | tail -3 && \
echo 'BUILD_DONE' && \
ls -lh target/release/wzp-relay target/release/wzp-client target/release/wzp-client-audio target/release/wzp-web target/release/wzp-bench"
# --- Package + upload to rustypaste ---
log "Packaging and uploading..."
UPLOAD_URL=$(ssh_cmd "cd /root/wzp-build && \
tar czf /tmp/wzp-linux-x86_64.tar.gz \
-C target/release wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench \
-C /root/wzp-build/crates/wzp-web/static index.html audio-processor.js 2>/dev/null && \
curl -s -F 'file=@/tmp/wzp-linux-x86_64.tar.gz' \
-H 'Authorization: DAxAAGghkn1WKv1+RpPKkg==' \
https://paste.dk.manko.yoga")
if [ -n "$UPLOAD_URL" ]; then
notify "WZP Linux binaries ready! $UPLOAD_URL"
log "Uploaded: $UPLOAD_URL"
else
notify "WZP Linux build FAILED"
err "Upload failed"
fi
# --- Transfer locally ---
log "Downloading binaries..."
mkdir -p "$LOCAL_OUTPUT"
for bin in wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench; do
scp $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip:/root/wzp-build/target/release/$bin" "$LOCAL_OUTPUT/$bin" 2>/dev/null
done
ls -lh "$LOCAL_OUTPUT"/wzp-*
# --- Cleanup ---
if [ "$KEEP_VM" = "1" ]; then
log "VM kept alive. Destroy: hcloud server delete $SERVER_NAME"
else
log "Destroying VM..."
hcloud server delete "$SERVER_NAME"
fi
log "Done!"
echo " Deploy: scp $LOCAL_OUTPUT/wzp-relay user@server:~/wzp/"


@@ -1,10 +0,0 @@
{
"version": 1,
"skills": {
"caveman": {
"source": "JuliusBrussee/caveman",
"sourceType": "github",
"computedHash": "aa7939fc4d1fe31484090290da77f2d21e026aa4b34b329d00e6630feb985d75"
}
}
}