Compare commits: debug/code...android-re

113 commits (only the SHA1 column was captured in this view; the author and date columns were empty):

75bc72a884 6aa52accef d0c17317ea 5799d18aee 46c9ee1be3 b53eae9192
a3f54566d4 76e9fe5e43 b0a89d4f39 abc96e8887 3a6ae61f8d 4c536d256b
b0ec9ff4ab 5855533a39 ed09c2e8cc f44306cc17 0b821585ab faec332a8c
fe9ae276dc 4fbf6770c4 30a893a73f d46f3b1deb 0d3f0d4dcb c184d5e1f3
5d8e743cbf 6694aebfd9 d27e85ecf2 39ac181d63 3351cb6473 54a4d91f3e
3b962bd4cb 1118eac752 f935bd69cd 1c684f6b47 c92db7e9b7 c3bd657224
8b79cdc6fc 2eab56beec 7dadc1ddd6 be0441295a b9f4e7f102 28f4a0fb6f
3d76acf528 f4b5996bdf fc721c4217 5c24adf1c1 8dbda3e052 c8a3aaacb6
54cb6c3b71 a3ebf5616f ff6d0444c0 8080713098 e813362395 d52b8befd6
0abecf7fd8 f4cc3b1a6b af4c89f5f0 406461d460 7064f484af 1d2222a25a
270e139f20 d9b2e0fd53 898c1ea32b b00db5dfdc bc8bb3d790 ea51d068e6
7271942c6a da84ed332c e50925e05a 6be36e43c2 2f2720802d 087bfd2335
0a05e62c7f b97f32ce46 d66d583583 d06cf66538 c8bcc5c974 760126b6ab
53f8bf8fff b3cdad0c75 fa3c7f1cef 68b56d9172 7973c8c6a3 3e9539e5da
a1ccb3f390 7751439e2b 20bc290c18 a8dc350a65 00fa109f07 1e40dec468
aecef0905d 18f7faa279 eeb85aeac2 00b405aa87 d09e21965e 97bcc79f9b
264ef9c4d4 a9adb5cfd7 a39b074d6e 9cab6e2347 5e93cb74f2 b56b4a759c
6f99841cc7 3b0811ce2e 9eed94850d 5e9718aeb2 3093933602 4c6c909732
33fab9a049 31d2306915 4af7c5f94c 6597b5bd86 ae9d8526dd
.agents/skills/caveman/SKILL.md (new file, 72 lines)

````markdown
---
name: caveman
description: >
  Ultra-compressed communication mode. Slash token usage ~75% by speaking like caveman
  while keeping full technical accuracy. Use when user says "caveman mode", "talk like caveman",
  "use caveman", "less tokens", "be brief", or invokes /caveman. Also auto-triggers
  when token efficiency is requested.
---

# Caveman Mode

## Core Rule

Respond like smart caveman. Cut articles, filler, pleasantries. Keep all technical substance.

## Grammar

- Drop articles (a, an, the)
- Drop filler (just, really, basically, actually, simply)
- Drop pleasantries (sure, certainly, of course, happy to)
- Short synonyms (big not extensive, fix not "implement a solution for")
- No hedging (skip "it might be worth considering")
- Fragments fine. No need full sentence
- Technical terms stay exact. "Polymorphism" stays "polymorphism"
- Code blocks unchanged. Caveman speak around code, not in code
- Error messages quoted exact. Caveman only for explanation

## Pattern

```
[thing] [action] [reason]. [next step].
```

Not:

> Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by...

Yes:

> Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:

## Examples

**User:** Why is my React component re-rendering?

**Normal (69 tokens):** "The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

**Caveman (19 tokens):** "New object ref each render. Inline object prop = new ref = re-render. Wrap in `useMemo`."

---

**User:** How do I set up a PostgreSQL connection pool?

**Caveman:**

Use `pg` pool:

```js
const pool = new Pool({
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
})
```

max = concurrent connections. Keep under DB limit. idleTimeout kill stale conn.

## Boundaries

- Code: write normal. Caveman English only
- Git commits: normal
- PR descriptions: normal
- User say "stop caveman" or "normal mode": revert immediately
````
Modified workflow (file name not captured in this view):

```diff
@@ -7,6 +7,8 @@ on:
       - 'feat/*'
     tags:
       - 'v*'
+    paths-ignore:
+      - '.gitea/**'
   workflow_dispatch:
 
 env:
```
.gitea/workflows/mirror-github.yml (new file, 43 lines)

```yaml
name: Mirror to GitHub

on:
  push:
    branches:
      - main
      - 'feat/*'
      - 'feature/*'
    tags:
      - '*'

jobs:
  mirror:
    runs-on: ubuntu-latest
    container:
      image: catthehacker/ubuntu:act-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Push to GitHub
        env:
          GH_SSH_KEY: ${{ secrets.GH_SSH_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "${GH_SSH_KEY}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan github.com >> ~/.ssh/known_hosts 2>/dev/null

          git remote add github git@github.com:manawenuz/wzp.git

          # Push the current branch
          BRANCH="${GITHUB_REF#refs/heads/}"
          TAG="${GITHUB_REF#refs/tags/}"

          if [ "${GITHUB_REF}" != "${GITHUB_REF#refs/tags/}" ]; then
            echo "Pushing tag: ${TAG}"
            git push github "refs/tags/${TAG}" --force
          else
            echo "Pushing branch: ${BRANCH}"
            git push github "HEAD:refs/heads/${BRANCH}" --force
          fi
```
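The mirror script's run step distinguishes tag pushes from branch pushes with POSIX prefix stripping: `${GITHUB_REF#refs/tags/}` removes the prefix only when it is present, so comparing the stripped value against the original tells which kind of ref fired the workflow. A minimal runnable sketch of that dispatch logic (the `classify` helper and the sample refs are illustrative, not from the workflow):

```shell
#!/bin/sh
# Sketch of the tag-vs-branch dispatch used in the mirror step.
classify() {
  GITHUB_REF="$1"
  BRANCH="${GITHUB_REF#refs/heads/}"   # strips the prefix only if present
  TAG="${GITHUB_REF#refs/tags/}"
  if [ "${GITHUB_REF}" != "${GITHUB_REF#refs/tags/}" ]; then
    # Stripping changed the string, so the ref must be a tag.
    echo "tag ${TAG}"
  else
    echo "branch ${BRANCH}"
  fi
}

classify "refs/tags/v1.2.3"    # prints: tag v1.2.3
classify "refs/heads/main"     # prints: branch main
```

The same comparison trick avoids a separate `case` statement or external `grep` call inside the CI container.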
.gitignore (vendored, 25 changed lines)

```diff
@@ -4,3 +4,28 @@
 *.swp
 *.swo
 *~
+
+# Logs
+logs
+*.log
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+dev-debug.log
+# Dependency directories
+node_modules/
+# Environment variables
+.env
+# Editor directories and files
+.idea
+.vscode
+*.suo
+*.ntvs*
+*.njsproj
+*.sln
+*.sw?
+# OS specific
+
+# Taskmaster (local workflow tool)
+.taskmaster/
+.env.example
```
Cargo.lock (generated, 246 changed lines)

```diff
@@ -297,6 +297,12 @@ dependencies = [
  "tower-service",
 ]
 
+[[package]]
+name = "base16ct"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4c7f02d4ea65f2c1853089ffd8d2787bdbc63de2f0d29dedbcf8ccdfa0ccd4cf"
+
 [[package]]
 name = "base64"
 version = "0.22.1"
@@ -467,6 +473,7 @@ dependencies = [
  "iana-time-zone",
  "js-sys",
  "num-traits",
+ "serde",
  "wasm-bindgen",
  "windows-link",
 ]
@@ -627,6 +634,24 @@ dependencies = [
  "libc",
 ]
 
+[[package]]
+name = "crunchy"
+version = "0.2.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "460fbee9c2c2f33933d720630a6a0bac33ba7053db5344fac858d4b8952d77d5"
+
+[[package]]
+name = "crypto-bigint"
+version = "0.5.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0dc92fb57ca44df6db8059111ab3af99a63d5d0f8375d9972e319a379c6bab76"
+dependencies = [
+ "generic-array",
+ "rand_core 0.6.4",
+ "subtle",
+ "zeroize",
+]
+
 [[package]]
 name = "crypto-common"
 version = "0.1.7"
@@ -650,6 +675,7 @@ dependencies = [
  "digest",
  "fiat-crypto",
  "rustc_version",
+ "serde",
  "subtle",
  "zeroize",
 ]
@@ -816,10 +842,32 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
 dependencies = [
  "block-buffer",
+ "const-oid",
  "crypto-common",
  "subtle",
 ]
 
+[[package]]
+name = "dirs"
+version = "6.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c3e8aa94d75141228480295a7d0e7feb620b1a5ad9f12bc40be62411e38cce4e"
+dependencies = [
+ "dirs-sys",
+]
+
+[[package]]
+name = "dirs-sys"
+version = "0.5.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e01a3366d27ee9890022452ee61b2b63a67e6f13f58900b651ff5665f0bb1fab"
+dependencies = [
+ "libc",
+ "option-ext",
+ "redox_users",
+ "windows-sys 0.61.2",
+]
+
 [[package]]
 name = "displaydoc"
 version = "0.2.5"
@@ -850,6 +898,21 @@ dependencies = [
  "rustfft",
 ]
 
+[[package]]
+name = "ecdsa"
+version = "0.16.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ee27f32b5c5292967d2d4a9d7f1e0b0aed2c15daded5a60300e4abb9d8020bca"
+dependencies = [
+ "der",
+ "digest",
+ "elliptic-curve",
+ "rfc6979",
+ "serdect",
+ "signature",
+ "spki",
+]
+
 [[package]]
 name = "ed25519"
 version = "2.2.3"
@@ -857,6 +920,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "115531babc129696a58c64a4fef0a8bf9e9698629fb97e9e40767d235cfbcd53"
 dependencies = [
  "pkcs8",
+ "serde",
  "signature",
 ]
 
@@ -881,6 +945,26 @@ version = "1.15.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"
 
+[[package]]
+name = "elliptic-curve"
+version = "0.13.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b5e6043086bf7973472e0c7dff2142ea0b680d30e18d9cc40f267efbf222bd47"
+dependencies = [
+ "base16ct",
+ "crypto-bigint",
+ "digest",
+ "ff",
+ "generic-array",
+ "group",
+ "pkcs8",
+ "rand_core 0.6.4",
+ "sec1",
+ "serdect",
+ "subtle",
+ "zeroize",
+]
+
 [[package]]
 name = "encoding_rs"
 version = "0.8.35"
@@ -924,6 +1008,16 @@ version = "2.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
 
+[[package]]
+name = "ff"
+version = "0.13.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c0b50bfb653653f9ca9095b427bed08ab8d75a137839d9ad64eb11810d5b6393"
+dependencies = [
+ "rand_core 0.6.4",
+ "subtle",
+]
+
 [[package]]
 name = "fiat-crypto"
 version = "0.2.9"
@@ -1084,6 +1178,7 @@ checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
 dependencies = [
  "typenum",
  "version_check",
+ "zeroize",
 ]
 
 [[package]]
@@ -1143,6 +1238,17 @@ version = "0.3.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "0cc23270f6e1808e30a928bdc84dea0b9b4136a8bc82338574f23baf47bbd280"
 
+[[package]]
+name = "group"
+version = "0.13.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f0f9ef7462f7c099f518d754361858f86d8a07af53ba9af0fe635bbccb151a63"
+dependencies = [
+ "ff",
+ "rand_core 0.6.4",
+ "subtle",
+]
+
 [[package]]
 name = "h2"
 version = "0.4.13"
@@ -1626,6 +1732,21 @@ dependencies = [
  "wasm-bindgen",
 ]
 
+[[package]]
+name = "k256"
+version = "0.13.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f6e3919bbaa2945715f0bb6d3934a173d1e9a59ac23767fbaaef277265a7411b"
+dependencies = [
+ "cfg-if",
+ "ecdsa",
+ "elliptic-curve",
+ "once_cell",
+ "serdect",
+ "sha2",
+ "signature",
+]
+
 [[package]]
 name = "lazy_static"
 version = "1.5.0"
@@ -1660,6 +1781,15 @@ version = "0.2.16"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b6d2cec3eae94f9f509c767b45932f1ada8350c4bdb85af2fcab4a3c14807981"
 
+[[package]]
+name = "libredox"
+version = "0.1.15"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7ddbf48fd451246b1f8c2610bd3b4ac0cc6e149d89832867093ab69a17194f08"
+dependencies = [
+ "libc",
+]
+
 [[package]]
 name = "linux-raw-sys"
 version = "0.12.1"
@@ -1702,6 +1832,15 @@ dependencies = [
  "libc",
 ]
 
+[[package]]
+name = "matchers"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9"
+dependencies = [
+ "regex-automata",
+]
+
 [[package]]
 name = "matchit"
 version = "0.7.3"
@@ -1980,6 +2119,12 @@ dependencies = [
  "vcpkg",
 ]
 
+[[package]]
+name = "option-ext"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d"
+
 [[package]]
 name = "os_str_bytes"
 version = "6.6.1"
@@ -2320,6 +2465,17 @@ dependencies = [
  "bitflags 2.11.0",
 ]
 
+[[package]]
+name = "redox_users"
+version = "0.5.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a4e608c6638b9c18977b00b475ac1f28d14e84b27d8d42f70e0bf1e3dec127ac"
+dependencies = [
+ "getrandom 0.2.17",
+ "libredox",
+ "thiserror 2.0.18",
+]
+
 [[package]]
 name = "regex"
 version = "1.12.3"
@@ -2389,6 +2545,16 @@ dependencies = [
  "web-sys",
 ]
 
+[[package]]
+name = "rfc6979"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f8dd2a808d456c4a54e300a23e9f5a67e122c3024119acbfd73e3bf664491cb2"
+dependencies = [
+ "hmac",
+ "subtle",
+]
+
 [[package]]
 name = "ring"
 version = "0.17.14"
@@ -2567,6 +2733,21 @@ version = "1.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 
+[[package]]
+name = "sec1"
+version = "0.7.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d3e97a565f76233a6003f9f5c54be1d9c5bdfa3eccfb189469f11ec4901c47dc"
+dependencies = [
+ "base16ct",
+ "der",
+ "generic-array",
+ "pkcs8",
+ "serdect",
+ "subtle",
+ "zeroize",
+]
+
 [[package]]
 name = "security-framework"
 version = "3.7.0"
@@ -2671,6 +2852,16 @@ dependencies = [
  "serde",
 ]
 
+[[package]]
+name = "serdect"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a84f14a19e9a014bb9f4512488d9829a68e04ecabffb0f9904cd1ace94598177"
+dependencies = [
+ "base16ct",
+ "serde",
+]
+
 [[package]]
 name = "sha1"
 version = "0.10.6"
@@ -2724,6 +2915,7 @@ version = "2.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "77549399552de45a898a580c1b41d445bf730df867cc44e6c0233bbc4b8329de"
 dependencies = [
+ "digest",
  "rand_core 0.6.4",
 ]
 
@@ -2937,6 +3129,15 @@ version = "0.1.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "7694e1cfe791f8d31026952abf09c69ca6f6fa4e1a1229e18988f06a04a12dca"
 
+[[package]]
+name = "tiny-keccak"
+version = "2.0.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2c9d3793400a45f954c52e73d068316d76b6f4e36977e3fcebb13a2721e80237"
+dependencies = [
+ "crunchy",
+]
+
 [[package]]
 name = "tinystr"
 version = "0.8.2"
@@ -3235,10 +3436,14 @@ version = "0.3.23"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "cb7f578e5945fb242538965c2d0b04418d38ec25c79d160cd279bf0731c8d319"
 dependencies = [
+ "matchers",
  "nu-ansi-term",
+ "once_cell",
+ "regex-automata",
  "sharded-slab",
  "smallvec",
  "thread_local",
+ "tracing",
  "tracing-core",
  "tracing-log",
 ]
@@ -3367,6 +3572,18 @@ version = "1.0.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"
 
+[[package]]
+name = "uuid"
+version = "1.23.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5ac8b6f42ead25368cf5b098aeb3dc8a1a2c05a3eee8a9a1a68c640edbfc79d9"
+dependencies = [
+ "getrandom 0.4.2",
+ "js-sys",
+ "serde_core",
+ "wasm-bindgen",
+]
+
 [[package]]
 name = "valuable"
 version = "0.1.1"
@@ -3406,7 +3623,28 @@ dependencies = [
 
 [[package]]
 name = "warzone-protocol"
-version = "0.1.0"
+version = "0.0.38"
+dependencies = [
+ "base64",
+ "bincode",
+ "bip39",
+ "chacha20poly1305",
+ "chrono",
+ "curve25519-dalek",
+ "ed25519-dalek",
+ "hex",
+ "hkdf",
+ "k256",
+ "rand 0.8.5",
+ "serde",
+ "serde_json",
+ "sha2",
+ "thiserror 2.0.18",
+ "tiny-keccak",
+ "uuid",
+ "x25519-dalek",
+ "zeroize",
+]
 
 [[package]]
 name = "wasi"
@@ -4132,6 +4370,8 @@ dependencies = [
  "async-trait",
  "axum 0.7.9",
  "bytes",
+ "chrono",
+ "dirs",
  "futures-util",
  "prometheus",
  "quinn",
@@ -4139,6 +4379,7 @@ dependencies = [
  "rustls",
  "serde",
  "serde_json",
+ "sha2",
  "tokio",
  "toml",
  "tower-http",
@@ -4158,10 +4399,13 @@ version = "0.1.0"
 dependencies = [
  "async-trait",
  "bytes",
+ "ed25519-dalek",
+ "hkdf",
  "quinn",
  "rcgen",
  "rustls",
  "serde_json",
+ "sha2",
  "tokio",
  "tracing",
  "wzp-proto",
```
Cargo manifest change (file name not captured in this view):

```diff
@@ -40,7 +40,7 @@ codec2 = "0.3"
 
 # Crypto
 x25519-dalek = { version = "2", features = ["static_secrets"] }
-ed25519-dalek = { version = "2", features = ["rand_core"] }
+ed25519-dalek = { version = "2", features = ["rand_core", "pkcs8"] }
 chacha20poly1305 = "0.10"
 hkdf = "0.12"
 sha2 = "0.10"
```
AudioPipeline (Kotlin, file path not captured in this view):

```diff
@@ -19,6 +19,8 @@ import java.io.FileOutputStream
 import java.io.OutputStreamWriter
 import java.nio.ByteBuffer
 import java.nio.ByteOrder
+import java.util.concurrent.CountDownLatch
+import java.util.concurrent.TimeUnit
 import kotlin.math.pow
 import kotlin.math.sqrt
 
@@ -55,10 +57,23 @@ class AudioPipeline(private val context: Context) {
     /** Whether to attach hardware AEC. Must be set before start(). */
     var aecEnabled: Boolean = true
     /** Enable debug recording of PCM + RMS histogram to cache dir. */
-    var debugRecording: Boolean = true
+    var debugRecording: Boolean = false
     private var captureThread: Thread? = null
     private var playoutThread: Thread? = null
 
+    // DirectByteBuffers for zero-copy JNI audio transfer.
+    // Allocated as class fields (NOT locals) because ART's JIT OSR
+    // can null local variables when it replaces the stack frame mid-loop.
+    // These survive OSR because they're on the heap.
+    private val captureDirectBuf: ByteBuffer =
+        ByteBuffer.allocateDirect(FRAME_SAMPLES * 2).order(ByteOrder.LITTLE_ENDIAN)
+    private val playoutDirectBuf: ByteBuffer =
+        ByteBuffer.allocateDirect(FRAME_SAMPLES * 2).order(ByteOrder.LITTLE_ENDIAN)
+
+    /** Latch counted down by each audio thread after exiting its loop.
+     * stop() does NOT wait on this — teardown waits via awaitDrain(). */
+    private var drainLatch: CountDownLatch? = null
+
     private val debugDir: File by lazy {
         File(context.cacheDir, "wzp_debug").also { it.mkdirs() }
     }
@@ -66,9 +81,11 @@ class AudioPipeline(private val context: Context) {
     fun start(engine: WzpEngine) {
         if (running) return
         running = true
+        drainLatch = CountDownLatch(2) // one for capture, one for playout
+
         captureThread = Thread({
             runCapture(engine)
+            drainLatch?.countDown() // signal: capture loop exited, no more JNI calls
             // Park thread forever — exiting triggers a libcrypto TLS destructor
             // crash (SIGSEGV in OPENSSL_free) on Android when a JNI-calling thread exits.
             parkThread()
@@ -80,6 +97,7 @@ class AudioPipeline(private val context: Context) {
 
         playoutThread = Thread({
             runPlayout(engine)
+            drainLatch?.countDown() // signal: playout loop exited
             parkThread()
         }, "wzp-playout").apply {
             isDaemon = true
@@ -92,10 +110,20 @@ class AudioPipeline(private val context: Context) {
 
     fun stop() {
         running = false
-        // Don't join — threads are parked as daemons to avoid native TLS crash
+        // Don't join threads — they are parked as daemons to avoid native TLS crash.
+        // Don't null thread refs or drainLatch — teardown() needs awaitDrain().
+        Log.i(TAG, "audio pipeline stopped (running=false)")
+    }
+
+    /** Block until both audio threads have exited their loops (max 200ms).
+     * After this returns, no more JNI calls to the engine will be made. */
+    fun awaitDrain(): Boolean {
+        val ok = drainLatch?.await(200, TimeUnit.MILLISECONDS) ?: true
+        if (!ok) Log.w(TAG, "awaitDrain: audio threads did not drain in 200ms")
         captureThread = null
         playoutThread = null
-        Log.i(TAG, "audio pipeline stopped")
+        drainLatch = null
+        return ok
     }
 
     private fun applyGain(pcm: ShortArray, count: Int, db: Float) {
```
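The stop()/awaitDrain() split in this diff is a drain handshake: stop() only flips the flag and returns immediately, while each worker counts down a latch after its loop exits, so teardown can wait (bounded) for "no more JNI calls" without joining threads that are deliberately parked forever. A minimal standalone Java sketch of the same pattern (class and method names are illustrative, not from the app):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Drain handshake: workers count down a latch when their loop exits;
// teardown waits on the latch instead of joining the threads.
public class DrainDemo {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final CountDownLatch drainLatch = new CountDownLatch(2);

    void workerLoop() {
        while (running.get()) {
            // ... process one frame ...
            Thread.yield();
        }
        drainLatch.countDown(); // loop exited: no further engine calls from here
        // the real code parks the thread here instead of letting it exit
    }

    void stop() { running.set(false); } // fast, non-blocking

    boolean awaitDrain() throws InterruptedException {
        return drainLatch.await(200, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        DrainDemo d = new DrainDemo();
        Thread t1 = new Thread(d::workerLoop);
        Thread t2 = new Thread(d::workerLoop);
        t1.setDaemon(true); t2.setDaemon(true);
        t1.start(); t2.start();
        d.stop();
        System.out.println(d.awaitDrain() ? "drained" : "timeout");
    }
}
```

The bounded await keeps teardown responsive even if a worker wedges: the caller learns the drain failed instead of blocking forever.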
@@ -206,7 +234,10 @@ class AudioPipeline(private val context: Context) {
|
|||||||
val read = recorder.read(pcm, 0, FRAME_SAMPLES)
|
val read = recorder.read(pcm, 0, FRAME_SAMPLES)
|
||||||
if (read > 0) {
|
if (read > 0) {
|
||||||
applyGain(pcm, read, captureGainDb)
|
applyGain(pcm, read, captureGainDb)
|
||||||
engine.writeAudio(pcm)
|
// Zero-copy write via DirectByteBuffer (class field, survives JIT OSR)
|
||||||
|
captureDirectBuf.clear()
|
||||||
|
captureDirectBuf.asShortBuffer().put(pcm, 0, read)
|
||||||
|
engine.writeAudioDirect(captureDirectBuf, read)
|
||||||
|
|
||||||
// Debug: write raw PCM + RMS
|
// Debug: write raw PCM + RMS
|
||||||
if (pcmOut != null) {
|
if (pcmOut != null) {
|
||||||
@@ -285,8 +316,12 @@ class AudioPipeline(private val context: Context) {
             }
             try {
                 while (running) {
-                    val read = engine.readAudio(pcm)
+                    // Zero-copy read via DirectByteBuffer (class field, survives JIT OSR)
+                    playoutDirectBuf.clear()
+                    val read = engine.readAudioDirect(playoutDirectBuf, FRAME_SAMPLES)
                     if (read >= FRAME_SAMPLES) {
+                        playoutDirectBuf.rewind()
+                        playoutDirectBuf.asShortBuffer().get(pcm, 0, read)
                         applyGain(pcm, read, playoutGainDb)
                         track.write(pcm, 0, read)
 
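The capture and playout hunks above both reuse a pre-allocated direct buffer. A minimal standalone sketch of that round trip; the allocation is not shown in the diff, so the buffer size and the FRAME_SAMPLES value (20 ms at 48 kHz) are assumptions:

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// One direct buffer allocated once (a class field in the real code) and
// reused per frame, so the JNI side can access the samples without an
// extra array copy. FRAME_SAMPLES = 960 is an assumption, not from the diff.
val FRAME_SAMPLES = 960

val captureDirectBuf: ByteBuffer =
    ByteBuffer.allocateDirect(FRAME_SAMPLES * 2)   // 2 bytes per i16 sample
        .order(ByteOrder.nativeOrder())            // native order, as the JNI side expects

fun main() {
    val pcm = ShortArray(FRAME_SAMPLES) { (it % 128).toShort() }

    // Same steps as the capture loop: clear, bulk-put, hand to native code.
    captureDirectBuf.clear()
    captureDirectBuf.asShortBuffer().put(pcm, 0, pcm.size)

    // Reading back mirrors the playout path (rewind + bulk-get).
    val out = ShortArray(FRAME_SAMPLES)
    captureDirectBuf.rewind()
    captureDirectBuf.asShortBuffer().get(out, 0, out.size)

    check(out.contentEquals(pcm)) { "direct-buffer round-trip lost samples" }
    println("round-trip ok, isDirect=${captureDirectBuf.isDirect}")
}
```

Note that `asShortBuffer()` creates a view with its own position, so the byte buffer's `clear()`/`rewind()` mainly document intent here; the pattern matches the diff's per-frame usage.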
@@ -28,6 +28,9 @@ class SettingsRepository(context: Context) {
|
|||||||
private const val KEY_PREFER_IPV6 = "prefer_ipv6"
|
private const val KEY_PREFER_IPV6 = "prefer_ipv6"
|
||||||
private const val KEY_IDENTITY_SEED = "identity_seed_hex"
|
private const val KEY_IDENTITY_SEED = "identity_seed_hex"
|
||||||
private const val KEY_AEC_ENABLED = "aec_enabled"
|
private const val KEY_AEC_ENABLED = "aec_enabled"
|
||||||
|
private const val KEY_DEBUG_RECORDING = "debug_recording"
|
||||||
|
private const val KEY_RECENT_ROOMS = "recent_rooms"
|
||||||
|
private const val TOFU_PREFIX = "tofu_"
|
||||||
}
|
}
|
||||||
|
|
||||||
// --- Servers ---
|
// --- Servers ---
|
||||||
@@ -118,6 +121,16 @@ class SettingsRepository(context: Context) {
     fun saveAecEnabled(enabled: Boolean) { prefs.edit().putBoolean(KEY_AEC_ENABLED, enabled).apply() }
     fun loadAecEnabled(): Boolean = prefs.getBoolean(KEY_AEC_ENABLED, true)
 
+    // --- Debug recording ---
+
+    fun saveDebugRecording(enabled: Boolean) { prefs.edit().putBoolean(KEY_DEBUG_RECORDING, enabled).apply() }
+    fun loadDebugRecording(): Boolean = prefs.getBoolean(KEY_DEBUG_RECORDING, false)
+
+    // --- Codec choice ---
+    // 0 = Opus (GOOD), 1 = Opus Low (DEGRADED), 2 = Codec2 (CATASTROPHIC)
+    fun saveCodecChoice(choice: Int) { prefs.edit().putInt("codec_choice", choice).apply() }
+    fun loadCodecChoice(): Int = prefs.getInt("codec_choice", 0)
+
     // --- Identity seed ---
 
     /**
@@ -138,4 +151,53 @@ class SettingsRepository(context: Context) {
     fun saveSeedHex(hex: String) {
         prefs.edit().putString(KEY_IDENTITY_SEED, hex).apply()
     }
+
+    // --- Recent rooms ---
+
+    data class RecentRoom(val relay: String, val room: String)
+
+    fun addRecentRoom(relay: String, room: String) {
+        val rooms = loadRecentRooms().toMutableList()
+        rooms.removeAll { it.relay == relay && it.room == room }
+        rooms.add(0, RecentRoom(relay, room))
+        if (rooms.size > 5) rooms.subList(5, rooms.size).clear()
+        val arr = JSONArray()
+        rooms.forEach { arr.put(JSONObject().apply { put("relay", it.relay); put("room", it.room) }) }
+        prefs.edit().putString(KEY_RECENT_ROOMS, arr.toString()).apply()
+    }
+
+    fun loadRecentRooms(): List<RecentRoom> {
+        val json = prefs.getString(KEY_RECENT_ROOMS, null) ?: return emptyList()
+        return try {
+            val arr = JSONArray(json)
+            (0 until arr.length()).map { i ->
+                val o = arr.getJSONObject(i)
+                RecentRoom(o.getString("relay"), o.getString("room"))
+            }
+        } catch (_: Exception) { emptyList() }
+    }
+
+    fun clearRecentRooms() {
+        prefs.edit().remove(KEY_RECENT_ROOMS).apply()
+    }
+
+    // --- Server fingerprint TOFU ---
+
+    fun saveServerFingerprint(address: String, fingerprint: String) {
+        prefs.edit().putString("$TOFU_PREFIX$address", fingerprint).apply()
+    }
+
+    fun loadServerFingerprint(address: String): String? {
+        return prefs.getString("$TOFU_PREFIX$address", null)
+    }
+
+    // --- Ping RTT cache ---
+
+    fun savePingRtt(address: String, rttMs: Int) {
+        prefs.edit().putInt("ping_rtt_$address", rttMs).apply()
+    }
+
+    fun loadPingRtt(address: String): Int {
+        return prefs.getInt("ping_rtt_$address", -1)
+    }
 }
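`addRecentRoom` combines dedupe, front insertion, and a cap of five entries before serialising. The list manipulation can be exercised standalone, without SharedPreferences or org.json:

```kotlin
// Pure-Kotlin sketch of the most-recently-used logic addRecentRoom applies
// (remove duplicate, insert at front, keep at most 5), with the persistence
// layer stripped so it runs standalone.
data class RecentRoom(val relay: String, val room: String)

fun updateRecent(rooms: List<RecentRoom>, relay: String, room: String): List<RecentRoom> {
    val out = rooms.toMutableList()
    out.removeAll { it.relay == relay && it.room == room }  // drop duplicate entry
    out.add(0, RecentRoom(relay, room))                     // newest first
    if (out.size > 5) out.subList(5, out.size).clear()      // keep at most 5
    return out
}

fun main() {
    var rooms = emptyList<RecentRoom>()
    for (r in listOf("a", "b", "c", "a")) rooms = updateRecent(rooms, "relay1", r)
    // "a" was re-added, so it moves to the front instead of duplicating.
    println(rooms.map { it.room })   // [a, c, b]
}
```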
@@ -33,10 +33,24 @@ data class CallStats(
     val fecRecovered: Long = 0,
     /** Current mic audio level (RMS, 0-32767). */
     val audioLevel: Int = 0,
+    /** Our current outgoing codec (e.g. "Opus24k"). */
+    val currentCodec: String = "",
+    /** Last seen incoming codec from peers. */
+    val peerCodec: String = "",
+    /** Whether auto quality mode is active. */
+    val autoMode: Boolean = false,
     /** Number of participants in the room. */
     val roomParticipantCount: Int = 0,
     /** Participants in the room (fingerprint + optional alias). */
     val roomParticipants: List<RoomMember> = emptyList(),
+    /** SAS verification code (4-digit, null if not in a call). */
+    val sasCode: Int? = null,
+    /** Incoming call ID (or "relay|room" for CallSetup). */
+    val incomingCallId: String? = null,
+    /** Incoming caller's fingerprint. */
+    val incomingCallerFp: String? = null,
+    /** Incoming caller's alias. */
+    val incomingCallerAlias: String? = null,
 ) {
     /** Human-readable quality label. */
     val qualityLabel: String
@@ -54,7 +68,8 @@ data class CallStats(
                 val o = arr.getJSONObject(i)
                 RoomMember(
                     fingerprint = o.optString("fingerprint", ""),
-                    alias = if (o.isNull("alias")) null else o.optString("alias", null)
+                    alias = if (o.isNull("alias")) null else o.optString("alias", null),
+                    relayLabel = if (o.isNull("relay_label")) null else o.optString("relay_label", null)
                 )
             }
         }
@@ -76,8 +91,15 @@ data class CallStats(
                 underruns = obj.optLong("underruns", 0),
                 fecRecovered = obj.optLong("fec_recovered", 0),
                 audioLevel = obj.optInt("audio_level", 0),
+                currentCodec = obj.optString("current_codec", ""),
+                peerCodec = obj.optString("peer_codec", ""),
+                autoMode = obj.optBoolean("auto_mode", false),
                 roomParticipantCount = obj.optInt("room_participant_count", 0),
-                roomParticipants = parseParticipants(obj.optJSONArray("room_participants"))
+                roomParticipants = parseParticipants(obj.optJSONArray("room_participants")),
+                sasCode = if (obj.has("sas_code")) obj.optInt("sas_code") else null,
+                incomingCallId = if (obj.isNull("incoming_call_id")) null else obj.optString("incoming_call_id", null),
+                incomingCallerFp = if (obj.isNull("incoming_caller_fp")) null else obj.optString("incoming_caller_fp", null),
+                incomingCallerAlias = if (obj.isNull("incoming_caller_alias")) null else obj.optString("incoming_caller_alias", null),
             )
         } catch (e: Exception) {
             CallStats()
@@ -88,7 +110,8 @@ data class CallStats(
 
 data class RoomMember(
     val fingerprint: String,
-    val alias: String? = null
+    val alias: String? = null,
+    val relayLabel: String? = null
 ) {
     /** Short display name: alias if set, otherwise first 8 chars of fingerprint. */
     val displayName: String
android/app/src/main/java/com/wzp/engine/SignalManager.kt (new file, 97 lines)
@@ -0,0 +1,97 @@
+package com.wzp.engine
+
+import org.json.JSONObject
+
+/**
+ * Persistent signal connection for direct 1:1 calls.
+ * Separate from WzpEngine — survives across calls.
+ *
+ * Lifecycle: connect() → [placeCall/answerCall] → destroy()
+ */
+class SignalManager {
+
+    private var handle: Long = 0L
+
+    val isConnected: Boolean get() = handle != 0L
+
+    /**
+     * Connect to relay and register for direct calls.
+     * MUST be called from a thread with sufficient stack (8MB).
+     * Blocks briefly during QUIC connect + register, then returns.
+     */
+    fun connect(relay: String, seedHex: String): Boolean {
+        if (handle != 0L) return true // already connected
+        handle = nativeSignalConnect(relay, seedHex)
+        return handle != 0L
+    }
+
+    /** Get current signal state as parsed object. Non-blocking. */
+    fun getState(): SignalState {
+        if (handle == 0L) return SignalState()
+        val json = nativeSignalGetState(handle) ?: return SignalState()
+        return try {
+            val obj = JSONObject(json)
+            SignalState(
+                status = obj.optString("status", "idle"),
+                fingerprint = obj.optString("fingerprint", ""),
+                incomingCallId = if (obj.isNull("incoming_call_id")) null else obj.optString("incoming_call_id"),
+                incomingCallerFp = if (obj.isNull("incoming_caller_fp")) null else obj.optString("incoming_caller_fp"),
+                incomingCallerAlias = if (obj.isNull("incoming_caller_alias")) null else obj.optString("incoming_caller_alias"),
+                callSetupRelay = if (obj.isNull("call_setup_relay")) null else obj.optString("call_setup_relay"),
+                callSetupRoom = if (obj.isNull("call_setup_room")) null else obj.optString("call_setup_room"),
+                callSetupId = if (obj.isNull("call_setup_id")) null else obj.optString("call_setup_id"),
+            )
+        } catch (e: Exception) {
+            SignalState()
+        }
+    }
+
+    /** Place a direct call to a target fingerprint. */
+    fun placeCall(targetFp: String): Int {
+        if (handle == 0L) return -1
+        return nativeSignalPlaceCall(handle, targetFp)
+    }
+
+    /** Answer an incoming call. mode: 0=Reject, 1=AcceptTrusted, 2=AcceptGeneric */
+    fun answerCall(callId: String, mode: Int = 2): Int {
+        if (handle == 0L) return -1
+        return nativeSignalAnswerCall(handle, callId, mode)
+    }
+
+    /** Send hangup signal. */
+    fun hangup() {
+        if (handle != 0L) nativeSignalHangup(handle)
+    }
+
+    /** Destroy the signal manager. */
+    fun destroy() {
+        if (handle != 0L) {
+            nativeSignalDestroy(handle)
+            handle = 0L
+        }
+    }
+
+    // JNI native methods
+    private external fun nativeSignalConnect(relay: String, seed: String): Long
+    private external fun nativeSignalGetState(handle: Long): String?
+    private external fun nativeSignalPlaceCall(handle: Long, targetFp: String): Int
+    private external fun nativeSignalAnswerCall(handle: Long, callId: String, mode: Int): Int
+    private external fun nativeSignalHangup(handle: Long)
+    private external fun nativeSignalDestroy(handle: Long)
+
+    companion object {
+        init { System.loadLibrary("wzp_android") }
+    }
+}
+
+/** Signal connection state. */
+data class SignalState(
+    val status: String = "idle",
+    val fingerprint: String = "",
+    val incomingCallId: String? = null,
+    val incomingCallerFp: String? = null,
+    val incomingCallerAlias: String? = null,
+    val callSetupRelay: String? = null,
+    val callSetupRoom: String? = null,
+    val callSetupId: String? = null,
+)
@@ -38,9 +38,12 @@ class WzpEngine(private val callback: WzpCallback) {
      * @param alias display name sent to relay for room participant list
      * @return 0 on success, negative error code on failure
      */
-    fun startCall(relayAddr: String, room: String, seedHex: String = "", token: String = "", alias: String = ""): Int {
+    /**
+     * @param profile 0 = Opus GOOD, 1 = Opus DEGRADED, 2 = Codec2 CATASTROPHIC
+     */
+    fun startCall(relayAddr: String, room: String, seedHex: String = "", token: String = "", alias: String = "", profile: Int = 0): Int {
         check(nativeHandle != 0L) { "Engine not initialized" }
-        val result = nativeStartCall(nativeHandle, relayAddr, room, seedHex, token, alias)
+        val result = nativeStartCall(nativeHandle, relayAddr, room, seedHex, token, alias, profile)
         if (result == 0) {
             callback.onCallStateChanged(CallStateConstants.CONNECTING)
         } else {
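The new `profile` parameter is documented as 0 = Opus GOOD, 1 = Opus DEGRADED, 2 = Codec2 CATASTROPHIC. A sketch of that int-to-profile mapping; the enum and helper names here are hypothetical, since the real mapping lives in the JNI bridge (`profile_from_int`), which this diff does not show:

```kotlin
// Illustrative mapping of the profile ints described in the startCall KDoc.
// Names are hypothetical; unknown values fall back to the default profile 0,
// mirroring the Kotlin default parameter (profile: Int = 0).
enum class QualityProfile { OPUS_GOOD, OPUS_DEGRADED, CODEC2_CATASTROPHIC }

fun profileFromInt(profile: Int): QualityProfile = when (profile) {
    1 -> QualityProfile.OPUS_DEGRADED
    2 -> QualityProfile.CODEC2_CATASTROPHIC
    else -> QualityProfile.OPUS_GOOD   // default, matches profile: Int = 0
}

fun main() {
    println(profileFromInt(0))   // OPUS_GOOD
    println(profileFromInt(2))   // CODEC2_CATASTROPHIC
}
```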
@@ -50,6 +53,7 @@ class WzpEngine(private val callback: WzpCallback) {
     }
 
     /** Stop the active call. Safe to call when no call is active. */
+    @Synchronized
     fun stopCall() {
         if (nativeHandle != 0L) {
             nativeStopCall(nativeHandle)
@@ -73,6 +77,7 @@ class WzpEngine(private val callback: WzpCallback) {
      *
      * @return JSON-serialised [CallStats], or `"{}"` if the engine is not initialised.
      */
+    @Synchronized
     fun getStats(): String {
         if (nativeHandle == 0L) return "{}"
         return try {
@@ -92,6 +97,7 @@ class WzpEngine(private val callback: WzpCallback) {
     }
 
     /** Destroy the native engine and free all resources. The instance must not be reused. */
+    @Synchronized
     fun destroy() {
         if (nativeHandle != 0L) {
             nativeDestroy(nativeHandle)
@@ -117,11 +123,31 @@ class WzpEngine(private val callback: WzpCallback) {
         return nativeReadAudio(nativeHandle, pcm)
     }
 
+    /**
+     * Write captured PCM from a DirectByteBuffer — zero JNI array copy.
+     * The buffer must be a direct ByteBuffer with native byte order containing i16 samples.
+     * Called from the AudioRecord capture thread.
+     */
+    fun writeAudioDirect(buffer: java.nio.ByteBuffer, sampleCount: Int): Int {
+        if (nativeHandle == 0L) return 0
+        return nativeWriteAudioDirect(nativeHandle, buffer, sampleCount)
+    }
+
+    /**
+     * Read decoded PCM into a DirectByteBuffer — zero JNI array copy.
+     * The buffer must be a direct ByteBuffer with native byte order.
+     * Called from the AudioTrack playout thread.
+     */
+    fun readAudioDirect(buffer: java.nio.ByteBuffer, maxSamples: Int): Int {
+        if (nativeHandle == 0L) return 0
+        return nativeReadAudioDirect(nativeHandle, buffer, maxSamples)
+    }
+
     // -- JNI native methods --------------------------------------------------
 
     private external fun nativeInit(): Long
     private external fun nativeStartCall(
-        handle: Long, relay: String, room: String, seed: String, token: String, alias: String
+        handle: Long, relay: String, room: String, seed: String, token: String, alias: String, profile: Int
     ): Int
     private external fun nativeStopCall(handle: Long)
     private external fun nativeSetMute(handle: Long, muted: Boolean)
@@ -130,13 +156,70 @@ class WzpEngine(private val callback: WzpCallback) {
     private external fun nativeForceProfile(handle: Long, profile: Int)
     private external fun nativeWriteAudio(handle: Long, pcm: ShortArray): Int
     private external fun nativeReadAudio(handle: Long, pcm: ShortArray): Int
+    private external fun nativeWriteAudioDirect(handle: Long, buffer: java.nio.ByteBuffer, sampleCount: Int): Int
+    private external fun nativeReadAudioDirect(handle: Long, buffer: java.nio.ByteBuffer, maxSamples: Int): Int
     private external fun nativeDestroy(handle: Long)
 
     companion object {
-        init {
-            System.loadLibrary("wzp_android")
-        }
+        init { System.loadLibrary("wzp_android") }
+
+        /** Get the identity fingerprint for a seed hex. No engine needed. */
+        @JvmStatic
+        private external fun nativeGetFingerprint(seedHex: String): String?
+
+        /** Compute the full identity fingerprint (xxxx:xxxx:...) from a seed hex string. */
+        @JvmStatic
+        fun getFingerprint(seedHex: String): String = nativeGetFingerprint(seedHex) ?: ""
     }
+
+    private external fun nativePingRelay(handle: Long, relay: String): String?
+    private external fun nativeStartSignaling(handle: Long, relay: String, seed: String, token: String, alias: String): Int
+    private external fun nativePlaceCall(handle: Long, targetFp: String): Int
+    private external fun nativeAnswerCall(handle: Long, callId: String, mode: Int): Int
+
+    /**
+     * Ping a relay server. Requires engine to be initialized.
+     * Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or null.
+     */
+    fun pingRelay(address: String): String? {
+        if (nativeHandle == 0L) return null
+        return nativePingRelay(nativeHandle, address)
+    }
+
+    /**
+     * Start persistent signaling connection for direct 1:1 calls.
+     * The engine registers on the relay and listens for incoming calls.
+     * Call state updates are available via [getStats].
+     *
+     * @return 0 on success, -1 on error
+     */
+    fun startSignaling(relay: String, seed: String = "", token: String = "", alias: String = ""): Int {
+        check(nativeHandle != 0L) { "Engine not initialized" }
+        return nativeStartSignaling(nativeHandle, relay, seed, token, alias)
+    }
+
+    /**
+     * Place a direct call to a peer by fingerprint.
+     * Requires [startSignaling] to have been called first.
+     *
+     * @return 0 on success, -1 on error
+     */
+    fun placeCall(targetFingerprint: String): Int {
+        check(nativeHandle != 0L) { "Engine not initialized" }
+        return nativePlaceCall(nativeHandle, targetFingerprint)
+    }
+
+    /**
+     * Answer an incoming direct call.
+     *
+     * @param callId The call ID from the incoming call (available in stats.incoming_call_id)
+     * @param mode 0=Reject, 1=AcceptTrusted (P2P in Phase 2), 2=AcceptGeneric (relay-mediated)
+     * @return 0 on success, -1 on error
+     */
+    fun answerCall(callId: String, mode: Int = 2): Int {
+        check(nativeHandle != 0L) { "Engine not initialized" }
+        return nativeAnswerCall(nativeHandle, callId, mode)
+    }
 }
 
 /** Integer constants matching the Rust [CallState] enum ordinals. */
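`getFingerprint` and `startCall` both take a seed as a hex string. Generating one mirrors the code this branch adds in `CallViewModel.registerForCalls()`: 32 random bytes, lowercase-hex encoded:

```kotlin
import java.security.SecureRandom

// Sketch of producing the seed-hex string that getFingerprint(seedHex) and
// startCall(seedHex = ...) expect, mirroring the generation added in
// CallViewModel.registerForCalls(): 32 bytes -> 64 lowercase hex chars.
fun generateSeedHex(): String {
    val seed = ByteArray(32).also { SecureRandom().nextBytes(it) }
    return seed.joinToString("") { "%02x".format(it) }   // %x treats Byte as unsigned
}

fun main() {
    val hex = generateSeedHex()
    println("seed: $hex (length ${hex.length})")
}
```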
android/app/src/main/java/com/wzp/net/RelayPinger.kt (new file, 12 lines)
@@ -0,0 +1,12 @@
+package com.wzp.net
+
+// Relay pinging is now done via WzpEngine.pingRelay() (instance method).
+// This file kept for the data class only.
+
+object RelayPinger {
+    data class PingResult(
+        val rttMs: Int,
+        val reachable: Boolean,
+        val serverFingerprint: String = "",
+    )
+}
@@ -12,6 +12,7 @@ import com.wzp.engine.CallStats
 import com.wzp.service.CallService
 import com.wzp.engine.WzpCallback
 import com.wzp.engine.WzpEngine
+import kotlinx.coroutines.Dispatchers
 import kotlinx.coroutines.Job
 import kotlinx.coroutines.delay
 import kotlinx.coroutines.flow.MutableStateFlow
@@ -19,6 +20,8 @@ import kotlinx.coroutines.flow.StateFlow
 import kotlinx.coroutines.flow.asStateFlow
 import kotlinx.coroutines.isActive
 import kotlinx.coroutines.launch
+import kotlinx.coroutines.withContext
+import org.json.JSONObject
 import java.io.File
 import java.net.Inet4Address
 import java.net.Inet6Address
@@ -26,6 +29,14 @@ import java.net.InetAddress
 
 data class ServerEntry(val address: String, val label: String)
 
+data class PingResult(
+    val rttMs: Int,
+    val serverFingerprint: String = "",
+    val reachable: Boolean = rttMs > 0,
+)
+
+enum class LockStatus { UNKNOWN, OFFLINE, NEW, VERIFIED, CHANGED }
+
 class CallViewModel : ViewModel(), WzpCallback {
 
     private var engine: WzpEngine? = null
@@ -70,6 +81,16 @@ class CallViewModel : ViewModel(), WzpCallback {
     private val _preferIPv6 = MutableStateFlow(false)
     val preferIPv6: StateFlow<Boolean> = _preferIPv6.asStateFlow()
 
+    private val _recentRooms = MutableStateFlow<List<com.wzp.data.SettingsRepository.RecentRoom>>(emptyList())
+    val recentRooms: StateFlow<List<com.wzp.data.SettingsRepository.RecentRoom>> = _recentRooms.asStateFlow()
+
+    /** Ping results keyed by server address. */
+    private val _pingResults = MutableStateFlow<Map<String, PingResult>>(emptyMap())
+    val pingResults: StateFlow<Map<String, PingResult>> = _pingResults.asStateFlow()
+
+    /** Known server fingerprints (TOFU). */
+    private val _knownFingerprints = MutableStateFlow<Map<String, String>>(emptyMap())
+
     private val _playoutGainDb = MutableStateFlow(0f)
     val playoutGainDb: StateFlow<Float> = _playoutGainDb.asStateFlow()
 
@@ -85,6 +106,18 @@ class CallViewModel : ViewModel(), WzpCallback {
     private val _aecEnabled = MutableStateFlow(true)
     val aecEnabled: StateFlow<Boolean> = _aecEnabled.asStateFlow()
 
+    private val _debugRecording = MutableStateFlow(false)
+    val debugRecording: StateFlow<Boolean> = _debugRecording.asStateFlow()
+
+    // Quality profile index (matches JNI bridge profile_from_int)
+    private val _codecChoice = MutableStateFlow(0)
+    val codecChoice: StateFlow<Int> = _codecChoice.asStateFlow()
+
+    /** Key-change warning dialog state. */
+    data class KeyWarningInfo(val address: String, val oldFp: String, val newFp: String)
+    private val _keyWarning = MutableStateFlow<KeyWarningInfo?>(null)
+    val keyWarning: StateFlow<KeyWarningInfo?> = _keyWarning.asStateFlow()
+
     /** True when a call just ended and debug report can be sent. */
     private val _debugReportAvailable = MutableStateFlow(false)
     val debugReportAvailable: StateFlow<Boolean> = _debugReportAvailable.asStateFlow()
@@ -99,13 +132,143 @@ class CallViewModel : ViewModel(), WzpCallback {
|
|||||||
|
|
||||||
private var statsJob: Job? = null
|
private var statsJob: Job? = null
|
||||||
|
|
||||||
|
// ── Direct calling state ──
|
||||||
|
/** 0=room mode, 1=direct call mode */
|
||||||
|
private val _callMode = MutableStateFlow(0)
|
||||||
|
val callMode: StateFlow<Int> = _callMode.asStateFlow()
|
||||||
|
|
||||||
|
/** Target fingerprint for direct call */
|
||||||
|
private val _targetFingerprint = MutableStateFlow("")
|
||||||
|
val targetFingerprint: StateFlow<String> = _targetFingerprint.asStateFlow()
|
||||||
|
|
||||||
|
/** Signal state string: "idle", "registered", "ringing", "incoming", "setup" */
|
||||||
|
private val _signalState = MutableStateFlow("idle")
|
||||||
|
val signalState: StateFlow<String> = _signalState.asStateFlow()
|
||||||
|
|
||||||
|
/** Incoming call info */
|
||||||
|
private val _incomingCallId = MutableStateFlow<String?>(null)
|
||||||
|
val incomingCallId: StateFlow<String?> = _incomingCallId.asStateFlow()
|
||||||
|
|
||||||
|
private val _incomingCallerFp = MutableStateFlow<String?>(null)
|
||||||
|
val incomingCallerFp: StateFlow<String?> = _incomingCallerFp.asStateFlow()
|
||||||
|
|
||||||
|
private val _incomingCallerAlias = MutableStateFlow<String?>(null)
|
||||||
|
val incomingCallerAlias: StateFlow<String?> = _incomingCallerAlias.asStateFlow()
|
||||||
|
|
||||||
|
/** Separate signal manager (persistent, survives calls) */
|
||||||
|
private var signalManager: com.wzp.engine.SignalManager? = null
|
||||||
|
private var signalPollJob: Job? = null
|
||||||
|
|
||||||
|
fun setCallMode(mode: Int) { _callMode.value = mode }
|
||||||
|
fun setTargetFingerprint(fp: String) { _targetFingerprint.value = fp }
|
||||||
|
|
||||||
|
/** Register on relay for direct calls */
|
||||||
|
fun registerForCalls() {
|
||||||
|
val serverIdx = _selectedServer.value
|
||||||
|
val serverList = _servers.value
|
||||||
|
if (serverIdx >= serverList.size) return
|
||||||
|
|
||||||
|
val relay = serverList[serverIdx].address
|
||||||
|
var seed = _seedHex.value
|
||||||
|
// Generate seed if empty (fresh install or cleared storage)
|
||||||
|
if (seed.isEmpty()) {
|
||||||
|
val newSeed = ByteArray(32).also { java.security.SecureRandom().nextBytes(it) }
|
||||||
|
seed = newSeed.joinToString("") { "%02x".format(it) }
|
||||||
|
_seedHex.value = seed
|
||||||
|
settings?.saveSeedHex(seed)
|
||||||
|
Log.i(TAG, "generated new identity seed")
|
||||||
|
}
|
||||||
|
val resolvedRelay = resolveToIp(relay) ?: relay
|
||||||
|
|
||||||
|
// nativeSignalConnect has JNI overhead — must be on a thread with enough stack.
|
||||||
|
// Dispatchers.IO threads overflow. Use explicit Java Thread.
|
||||||
|
Thread(null, {
|
||||||
|
try {
|
||||||
|
val mgr = com.wzp.engine.SignalManager()
|
||||||
|
val ok = mgr.connect(resolvedRelay, seed)
|
||||||
|
viewModelScope.launch {
|
||||||
|
if (ok) {
|
||||||
|
signalManager = mgr
|
||||||
|
startSignalPolling()
|
||||||
|
} else {
|
||||||
|
_errorMessage.value = "Failed to register on relay"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} catch (e: Exception) {
|
||||||
|
viewModelScope.launch {
|
||||||
|
_errorMessage.value = "Register error: ${e.message}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}, "wzp-signal-init", 8 * 1024 * 1024).start()
|
||||||
|
}
|
||||||
|
|
||||||
|
/** Poll signal manager state every 500ms */
|
||||||
|
+    private fun startSignalPolling() {
+        signalPollJob?.cancel()
+        signalPollJob = viewModelScope.launch {
+            while (isActive) {
+                val mgr = signalManager
+                if (mgr != null && mgr.isConnected) {
+                    val state = mgr.getState()
+                    _signalState.value = state.status
+                    _incomingCallId.value = state.incomingCallId
+                    _incomingCallerFp.value = state.incomingCallerFp
+                    _incomingCallerAlias.value = state.incomingCallerAlias
+
+                    // Auto-connect to media room when call is set up
+                    if (state.status == "setup" && state.callSetupRelay != null && state.callSetupRoom != null) {
+                        Log.i(TAG, "CallSetup: connecting to ${state.callSetupRelay} room ${state.callSetupRoom}")
+                        startCallInternal(state.callSetupRelay, state.callSetupRoom)
+                    }
+                }
+                delay(500L)
+            }
+        }
+    }
+
+    private fun stopSignalPolling() {
+        signalPollJob?.cancel()
+        signalPollJob = null
+    }
+
+    /** Place a direct call to the target fingerprint */
+    fun placeDirectCall() {
+        val target = _targetFingerprint.value.trim()
+        if (target.isEmpty()) {
+            _errorMessage.value = "Enter a fingerprint to call"
+            return
+        }
+        signalManager?.placeCall(target)
+    }
+
+    /** Answer an incoming direct call */
+    fun answerIncomingCall(mode: Int = 2) {
+        val callId = _incomingCallId.value ?: return
+        signalManager?.answerCall(callId, mode)
+    }
+
+    /** Reject an incoming direct call */
+    fun rejectIncomingCall() {
+        val callId = _incomingCallId.value ?: return
+        signalManager?.answerCall(callId, 0)
+    }
+
+    /** Hang up direct call — media ends, signal stays alive */
+    fun hangupDirectCall() {
+        signalManager?.hangup()
+        engine?.stopCall()
+        engine?.destroy()
+        engine = null
+        engineInitialized = false
+    }
     companion object {
         private const val TAG = "WzpCall"
         val DEFAULT_SERVERS = listOf(
             ServerEntry("172.16.81.175:4433", "LAN (172.16.81.175)"),
             ServerEntry("193.180.213.68:4433", "Pangolin (IP)"),
         )
-        const val DEFAULT_ROOM = "android"
+        const val DEFAULT_ROOM = "general"
     }
     fun setContext(context: Context) {
@@ -139,6 +302,9 @@ class CallViewModel : ViewModel(), WzpCallback {
         _captureGainDb.value = s.loadCaptureGain()
         _seedHex.value = s.getOrCreateSeedHex()
         _aecEnabled.value = s.loadAecEnabled()
+        _debugRecording.value = s.loadDebugRecording()
+        _codecChoice.value = s.loadCodecChoice()
+        _recentRooms.value = s.loadRecentRooms()
     }
     fun selectServer(index: Int) {
@@ -182,6 +348,70 @@ class CallViewModel : ViewModel(), WzpCallback {
         settings?.saveSelectedServer(_selectedServer.value)
     }
+
+    /**
+     * Ping all servers via native QUIC. Requires engine to be initialized.
+     * Creates engine if needed, pings, keeps engine alive for subsequent Connect.
+     */
+    fun pingAllServers() {
+        viewModelScope.launch {
+            // Ensure engine exists
+            if (engine == null || engine?.isInitialized != true) {
+                try {
+                    engine = WzpEngine(this@CallViewModel).also { it.init() }
+                    engineInitialized = true
+                } catch (e: Exception) {
+                    Log.w(TAG, "engine init for ping failed: $e")
+                    return@launch
+                }
+            }
+            val eng = engine ?: return@launch
+
+            val results = mutableMapOf<String, PingResult>()
+            val known = mutableMapOf<String, String>()
+            _servers.value.forEach { server ->
+                val json = withContext(Dispatchers.IO) {
+                    eng.pingRelay(server.address)
+                }
+                if (json != null) {
+                    try {
+                        val obj = JSONObject(json)
+                        val rtt = obj.getInt("rtt_ms")
+                        val fp = obj.optString("server_fingerprint", "")
+                        results[server.address] = PingResult(rttMs = rtt, serverFingerprint = fp)
+                        // TOFU
+                        if (fp.isNotEmpty()) {
+                            val saved = settings?.loadServerFingerprint(server.address)
+                            if (saved == null) settings?.saveServerFingerprint(server.address, fp)
+                            known[server.address] = saved ?: fp
+                        }
+                    } catch (_: Exception) {}
+                }
+            }
+            _pingResults.value = results
+            _knownFingerprints.value = known
+        }
+    }
+
+    /** Load saved TOFU fingerprints. */
+    fun loadSavedFingerprints() {
+        val known = mutableMapOf<String, String>()
+        _servers.value.forEach { server ->
+            settings?.loadServerFingerprint(server.address)?.let {
+                known[server.address] = it
+            }
+        }
+        _knownFingerprints.value = known
+    }
+
+    /** Get lock status for a server. */
+    fun lockStatus(address: String): LockStatus {
+        val pr = _pingResults.value[address] ?: return LockStatus.UNKNOWN
+        if (!pr.reachable) return LockStatus.OFFLINE
+        val known = _knownFingerprints.value[address] ?: return LockStatus.NEW
+        if (pr.serverFingerprint.isEmpty()) return LockStatus.NEW
+        return if (pr.serverFingerprint == known) LockStatus.VERIFIED else LockStatus.CHANGED
+    }
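The TOFU (trust-on-first-use) decision in `lockStatus` is a small state function worth seeing in isolation. A minimal Rust sketch of the same logic, assuming nothing beyond the states visible in the Kotlin above (`PingResult` and the status names mirror the Kotlin; everything else is illustrative):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum LockStatus { Unknown, Offline, New, Verified, Changed }

struct PingResult { reachable: bool, server_fingerprint: String }

// Mirrors the Kotlin lockStatus(): Unknown until pinged, Offline if
// unreachable, New until a fingerprint is pinned, then Verified/Changed.
fn lock_status(
    addr: &str,
    pings: &HashMap<String, PingResult>,
    known: &HashMap<String, String>,
) -> LockStatus {
    let pr = match pings.get(addr) { Some(p) => p, None => return LockStatus::Unknown };
    if !pr.reachable { return LockStatus::Offline; }
    let pinned = match known.get(addr) { Some(k) => k, None => return LockStatus::New };
    if pr.server_fingerprint.is_empty() { return LockStatus::New; }
    if &pr.server_fingerprint == pinned { LockStatus::Verified } else { LockStatus::Changed }
}

fn main() {
    let mut pings = HashMap::new();
    let mut known = HashMap::new();
    assert_eq!(lock_status("a:4433", &pings, &known), LockStatus::Unknown);
    pings.insert("a:4433".to_string(),
                 PingResult { reachable: true, server_fingerprint: "AB12".to_string() });
    assert_eq!(lock_status("a:4433", &pings, &known), LockStatus::New);
    known.insert("a:4433".to_string(), "AB12".to_string()); // TOFU pin on first sight
    assert_eq!(lock_status("a:4433", &pings, &known), LockStatus::Verified);
    known.insert("a:4433".to_string(), "ZZ99".to_string()); // simulated key change
    assert_eq!(lock_status("a:4433", &pings, &known), LockStatus::Changed);
    println!("ok");
}
```

Note the ordering: reachability is checked before the pin, so a dead server never reports `Changed`, matching the early returns in the Kotlin.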
     fun setRoomName(name: String) {
         _roomName.value = name
         settings?.saveRoom(name)
@@ -214,6 +444,16 @@ class CallViewModel : ViewModel(), WzpCallback {
         settings?.saveAecEnabled(enabled)
     }
+
+    fun setDebugRecording(enabled: Boolean) {
+        _debugRecording.value = enabled
+        settings?.saveDebugRecording(enabled)
+    }
+
+    fun setCodecChoice(choice: Int) {
+        _codecChoice.value = choice
+        settings?.saveCodecChoice(choice)
+    }
+
     /**
      * Resolve DNS hostname to IP address on the Kotlin/Android side,
      * since Rust's DNS resolution may not work on Android.
@@ -254,8 +494,17 @@ class CallViewModel : ViewModel(), WzpCallback {
         Log.i(TAG, "teardown: stopping audio, stopService=$stopService")
         val hadCall = audioStarted
         CallService.onStopFromNotification = null
-        stopAudio()
+        stopAudio() // sets running=false (non-blocking)
         stopStatsPolling()
+
+        // Wait for audio threads to exit their loops before destroying the engine.
+        // This guarantees no in-flight JNI calls to writeAudio/readAudio.
+        val drained = audioPipeline?.awaitDrain() ?: true
+        if (!drained) {
+            Log.w(TAG, "teardown: audio threads did not drain in time")
+        }
+        audioPipeline = null
+
         Log.i(TAG, "teardown: stopping engine")
         try { engine?.stopCall() } catch (e: Exception) { Log.w(TAG, "stopCall err: $e") }
         try { engine?.destroy() } catch (e: Exception) { Log.w(TAG, "destroy err: $e") }
@@ -271,13 +520,82 @@ class CallViewModel : ViewModel(), WzpCallback {
         Log.i(TAG, "teardown: done")
     }
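The teardown comments above describe a general pattern: flip a running flag (non-blocking stop), then join the audio threads before destroying the engine so no callback is still in flight. A Rust sketch of that pattern under stated assumptions (`AudioPipeline`, `start`, `stop`, and `await_drain` are illustrative stand-ins for the Kotlin pipeline, not real APIs from this codebase):

```rust
use std::sync::{Arc, atomic::{AtomicBool, Ordering}};
use std::thread;
use std::time::Duration;

// stop() only flips the flag; await_drain() joins the worker thread,
// after which it is safe to tear down whatever the worker was calling into.
struct AudioPipeline {
    running: Arc<AtomicBool>,
    worker: Option<thread::JoinHandle<()>>,
}

impl AudioPipeline {
    fn start() -> Self {
        let running = Arc::new(AtomicBool::new(true));
        let r = running.clone();
        let worker = thread::spawn(move || {
            while r.load(Ordering::Acquire) {
                // In the real app: read/write PCM through the engine here.
                thread::sleep(Duration::from_millis(5));
            }
        });
        Self { running, worker: Some(worker) }
    }

    fn stop(&self) {
        self.running.store(false, Ordering::Release); // non-blocking
    }

    fn await_drain(&mut self) -> bool {
        // Joining guarantees the loop body is no longer executing.
        self.worker.take().map(|h| h.join().is_ok()).unwrap_or(true)
    }
}

fn main() {
    let mut p = AudioPipeline::start();
    p.stop();                 // like stopAudio(): flag only
    assert!(p.await_drain()); // like teardown: wait, then destroy the engine
    println!("drained");
}
```

Splitting stop from drain is what lets `stopAudio()` stay cheap on the UI path while `teardown()` pays the join cost exactly once.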
+
+    /** Accept the new server key and proceed with the call. */
+    fun acceptNewFingerprint() {
+        val info = _keyWarning.value ?: return
+        _knownFingerprints.value = _knownFingerprints.value.toMutableMap().also {
+            it[info.address] = info.newFp
+        }
+        settings?.saveServerFingerprint(info.address, info.newFp)
+        _keyWarning.value = null
+        startCallInternal()
+    }
+
+    fun dismissKeyWarning() {
+        _keyWarning.value = null
+    }
+
     fun startCall() {
+        val serverEntry = _servers.value[_selectedServer.value]
+        // Check for key change before connecting
+        val ls = lockStatus(serverEntry.address)
+        if (ls == LockStatus.CHANGED) {
+            val known = _knownFingerprints.value[serverEntry.address] ?: ""
+            val current = _pingResults.value[serverEntry.address]?.serverFingerprint ?: ""
+            _keyWarning.value = KeyWarningInfo(serverEntry.address, known, current)
+            return
+        }
+        startCallInternal()
+    }
+
+    /** Start a call to a specific relay + room (used by direct call setup). */
+    private fun startCallInternal(relay: String, room: String) {
+        Log.i(TAG, "startCallDirect: relay=$relay room=$room")
+        try {
+            // Don't teardown — keep the signal connection alive
+            engine = WzpEngine(this)
+            engine!!.init()
+            engineInitialized = true
+            _callState.value = 1
+            _errorMessage.value = null
+            try { appContext?.let { CallService.start(it) } } catch (e: Exception) {
+                Log.w(TAG, "service start err: $e")
+            }
+            startStatsPolling()
+            viewModelScope.launch(kotlinx.coroutines.Dispatchers.IO) {
+                try {
+                    val seed = _seedHex.value
+                    val name = _alias.value
+                    val result = engine?.startCall(relay, room, seedHex = seed, alias = name, profile = _codecChoice.value) ?: -1
+                    CallService.onStopFromNotification = { stopCall() }
+                    if (result != 0) {
+                        _callState.value = 0
+                        _errorMessage.value = "Failed to connect to call room (code $result)"
+                        appContext?.let { CallService.stop(it) }
+                    }
+                } catch (e: Exception) {
+                    Log.e(TAG, "startCallDirect error", e)
+                    _callState.value = 0
+                    _errorMessage.value = "Engine error: ${e.message}"
+                    appContext?.let { CallService.stop(it) }
+                }
+            }
+        } catch (e: Exception) {
+            Log.e(TAG, "startCallDirect error", e)
+            _callState.value = 0
+            _errorMessage.value = "Engine error: ${e.message}"
+        }
+    }
+
+    private fun startCallInternal() {
         val serverEntry = _servers.value[_selectedServer.value]
         val room = _roomName.value
         Log.i(TAG, "startCall: server=${serverEntry.address} room=$room")
         _debugReportAvailable.value = false
         _debugReportStatus.value = null
         lastCallServer = serverEntry.address
+        settings?.addRecentRoom(serverEntry.address, room)
+        _recentRooms.value = settings?.loadRecentRooms() ?: emptyList()
         debugReporter?.prepareForCall()
         try {
             // Teardown previous call but don't stop the service (we're about to restart it)
@@ -300,7 +618,7 @@ class CallViewModel : ViewModel(), WzpCallback {
             val seed = _seedHex.value
             val name = _alias.value
             Log.i(TAG, "startCall: resolved=$relay, alias=$name, calling engine.startCall")
-            val result = engine?.startCall(relay, room, seedHex = seed, alias = name) ?: -1
+            val result = engine?.startCall(relay, room, seedHex = seed, alias = name, profile = _codecChoice.value) ?: -1
             Log.i(TAG, "startCall: engine returned $result")
             // Only wire up notification callback after engine is running
             CallService.onStopFromNotification = { stopCall() }
@@ -391,6 +709,7 @@ class CallViewModel : ViewModel(), WzpCallback {
             it.playoutGainDb = _playoutGainDb.value
             it.captureGainDb = _captureGainDb.value
             it.aecEnabled = _aecEnabled.value
+            it.debugRecording = _debugRecording.value
             it.start(e)
         }
         audioRouteManager?.register()
@@ -399,8 +718,7 @@ class CallViewModel : ViewModel(), WzpCallback {

     private fun stopAudio() {
         if (!audioStarted) return
-        audioPipeline?.stop()
-        audioPipeline = null
+        audioPipeline?.stop() // sets running=false; DON'T null — teardown needs awaitDrain()
         audioRouteManager?.unregister()
         audioRouteManager?.setSpeaker(false)
         _isSpeaker.value = false
@@ -419,6 +737,7 @@ class CallViewModel : ViewModel(), WzpCallback {
         val s = CallStats.fromJson(json)
         lastCallDuration = s.durationSecs
         _stats.value = s
+        // Only update callState from media engine stats (not signal)
         if (s.state != 0) {
             _callState.value = s.state
         }
(File diff suppressed because it is too large.)

android/app/src/main/java/com/wzp/ui/components/Identicon.kt (new file, 141 lines)
@@ -0,0 +1,141 @@
+package com.wzp.ui.components
+
+import android.widget.Toast
+import androidx.compose.foundation.Canvas
+import androidx.compose.foundation.clickable
+import androidx.compose.foundation.layout.size
+import androidx.compose.foundation.shape.RoundedCornerShape
+import androidx.compose.runtime.Composable
+import androidx.compose.ui.Modifier
+import androidx.compose.ui.draw.clip
+import androidx.compose.ui.geometry.Offset
+import androidx.compose.ui.geometry.Size
+import androidx.compose.ui.graphics.Color
+import androidx.compose.ui.platform.LocalClipboardManager
+import androidx.compose.ui.platform.LocalContext
+import androidx.compose.ui.text.AnnotatedString
+import androidx.compose.ui.unit.Dp
+import androidx.compose.ui.unit.dp
+import kotlin.math.min
+
+/**
+ * Deterministic identicon — generates a unique 5x5 symmetric pattern
+ * from a hex fingerprint string. Identical algorithm to the desktop
+ * TypeScript implementation in identicon.ts.
+ */
+@Composable
+fun Identicon(
+    fingerprint: String,
+    size: Dp = 36.dp,
+    clickToCopy: Boolean = true,
+    modifier: Modifier = Modifier,
+) {
+    val clipboard = LocalClipboardManager.current
+    val context = LocalContext.current
+    val bytes = hashBytes(fingerprint)
+    val (bg, fg) = deriveColors(bytes)
+    val grid = buildGrid(bytes)
+
+    Canvas(
+        modifier = modifier
+            .size(size)
+            .clip(RoundedCornerShape(size * 0.12f))
+            .then(
+                if (clickToCopy && fingerprint.isNotEmpty()) {
+                    Modifier.clickable {
+                        clipboard.setText(AnnotatedString(fingerprint))
+                        Toast.makeText(context, "Copied", Toast.LENGTH_SHORT).show()
+                    }
+                } else Modifier
+            )
+    ) {
+        val cellW = this.size.width / 5f
+        val cellH = this.size.height / 5f
+
+        // Background
+        drawRect(color = bg, size = this.size)
+
+        // Foreground cells
+        for (y in 0 until 5) {
+            for (x in 0 until 5) {
+                if (grid[y][x]) {
+                    drawRect(
+                        color = fg,
+                        topLeft = Offset(x * cellW, y * cellH),
+                        size = Size(cellW, cellH),
+                    )
+                }
+            }
+        }
+    }
+}
+
+/**
+ * Fingerprint text that copies to clipboard on tap.
+ */
+@Composable
+fun CopyableFingerprint(
+    fingerprint: String,
+    modifier: Modifier = Modifier,
+    style: androidx.compose.ui.text.TextStyle = androidx.compose.material3.MaterialTheme.typography.bodySmall,
+    color: Color = Color.Unspecified,
+) {
+    val clipboard = LocalClipboardManager.current
+    val context = LocalContext.current
+
+    androidx.compose.material3.Text(
+        text = fingerprint,
+        style = style,
+        color = color,
+        modifier = modifier.clickable {
+            if (fingerprint.isNotEmpty()) {
+                clipboard.setText(AnnotatedString(fingerprint))
+                Toast.makeText(context, "Fingerprint copied", Toast.LENGTH_SHORT).show()
+            }
+        }
+    )
+}
+
+// --- Internal helpers (matching desktop identicon.ts) ---
+
+private fun hashBytes(hex: String): List<Int> {
+    val clean = hex.filter { it.isLetterOrDigit() }
+    val bytes = mutableListOf<Int>()
+    var i = 0
+    while (i + 1 < clean.length) {
+        val b = clean.substring(i, i + 2).toIntOrNull(16) ?: 0
+        bytes.add(b)
+        i += 2
+    }
+    // Pad to at least 16 bytes
+    while (bytes.size < 16) bytes.add(0)
+    return bytes
+}
+
+private fun deriveColors(bytes: List<Int>): Pair<Color, Color> {
+    val hue1 = bytes[0] * 360f / 256f
+    val hue2 = (bytes[1] * 360f / 256f + 120f) % 360f
+    val bg = hslToColor(hue1, 0.65f, 0.35f)
+    val fg = hslToColor(hue2, 0.70f, 0.55f)
+    return bg to fg
+}
+
+private fun buildGrid(bytes: List<Int>): List<List<Boolean>> {
+    return (0 until 5).map { y ->
+        val left = (0 until 3).map { x ->
+            val idx = 2 + y * 3 + x
+            bytes[idx % bytes.size] > 128
+        }
+        // Mirror: col3 = col1, col4 = col0
+        listOf(left[0], left[1], left[2], left[1], left[0])
+    }
+}
+
+private fun hslToColor(h: Float, s: Float, l: Float): Color {
+    val k = { n: Float -> (n + h / 30f) % 12f }
+    val a = s * min(l, 1f - l)
+    val f = { n: Float ->
+        l - a * maxOf(-1f, minOf(k(n) - 3f, minOf(9f - k(n), 1f)))
+    }
+    return Color(f(0f), f(8f), f(4f))
+}
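The core of the identicon algorithm (hex pairs hashed to bytes, a 3-wide half mirrored into a 5x5 grid) is portable, which is what keeps the Android and desktop renderings identical. A minimal Rust sketch of just the byte/grid derivation, leaving the drawing out (function names are illustrative translations of the Kotlin helpers above):

```rust
// Parse hex pairs into bytes, padding to 16, like hashBytes().
// Non-hex letters fall back to 0, matching toIntOrNull(16) ?: 0.
fn hash_bytes(hex: &str) -> Vec<u8> {
    let clean: String = hex.chars().filter(|c| c.is_ascii_alphanumeric()).collect();
    let mut bytes: Vec<u8> = clean.as_bytes()
        .chunks_exact(2)
        .map(|p| u8::from_str_radix(std::str::from_utf8(p).unwrap(), 16).unwrap_or(0))
        .collect();
    while bytes.len() < 16 { bytes.push(0); }
    bytes
}

// Build the 5x5 grid: 3 derived columns mirrored out to 5, like buildGrid().
fn build_grid(bytes: &[u8]) -> [[bool; 5]; 5] {
    let mut grid = [[false; 5]; 5];
    for y in 0..5 {
        let left: Vec<bool> = (0..3)
            .map(|x| bytes[(2 + y * 3 + x) % bytes.len()] > 128)
            .collect();
        // Mirror: col3 = col1, col4 = col0
        grid[y] = [left[0], left[1], left[2], left[1], left[0]];
    }
    grid
}

fn main() {
    let g = build_grid(&hash_bytes("deadbeefcafebabe1234567890abcdef"));
    for row in &g {
        assert_eq!(row[0], row[4]); // horizontal symmetry by construction
        assert_eq!(row[1], row[3]);
    }
    println!("symmetric");
}
```

Bytes 0 and 1 are reserved for the two hues, which is why the grid sampling starts at offset 2.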
@@ -1,5 +1,6 @@
 package com.wzp.ui.settings

+import androidx.compose.foundation.clickable
 import android.content.ClipData
 import android.content.ClipboardManager
 import android.content.Context
@@ -22,6 +23,7 @@ import androidx.compose.material3.AlertDialog
 import androidx.compose.material3.Button
 import androidx.compose.material3.ButtonDefaults
 import androidx.compose.material3.Divider
+import androidx.compose.material3.RadioButton
 import androidx.compose.material3.FilledTonalButton
 import androidx.compose.material3.FilledTonalIconButton
 import androidx.compose.material3.IconButtonDefaults
@@ -158,20 +160,30 @@ fun SettingsScreen(

     Spacer(modifier = Modifier.height(16.dp))

-    // Fingerprint display
+    // Fingerprint display with identicon
     val fingerprint = if (draftSeedHex.length >= 16) draftSeedHex.take(16).uppercase() else "Not generated"
     Text(
         text = "Fingerprint",
         style = MaterialTheme.typography.labelSmall,
         color = MaterialTheme.colorScheme.onSurfaceVariant
     )
-    Text(
-        text = fingerprint.chunked(4).joinToString(" "),
-        style = MaterialTheme.typography.bodyMedium.copy(
-            fontFamily = FontFamily.Monospace
-        ),
-        color = MaterialTheme.colorScheme.onSurface
-    )
+    Row(
+        verticalAlignment = Alignment.CenterVertically,
+        modifier = Modifier.padding(vertical = 4.dp)
+    ) {
+        com.wzp.ui.components.Identicon(
+            fingerprint = draftSeedHex,
+            size = 40.dp,
+        )
+        Spacer(modifier = Modifier.width(12.dp))
+        com.wzp.ui.components.CopyableFingerprint(
+            fingerprint = fingerprint.chunked(4).joinToString(" "),
+            style = MaterialTheme.typography.bodyMedium.copy(
+                fontFamily = FontFamily.Monospace
+            ),
+            color = MaterialTheme.colorScheme.onSurface,
+        )
+    }

     Spacer(modifier = Modifier.height(12.dp))
@@ -231,6 +243,51 @@ fun SettingsScreen(
         )
     }
+
+    Spacer(modifier = Modifier.height(12.dp))
+
+    // Quality selection — slider from best (studio 64k) to worst (codec2 1.2k) + auto
+    val qualityLabels = listOf(
+        "Studio 64k", "Studio 48k", "Studio 32k", "Auto",
+        "Opus 24k", "Opus 6k", "Codec2 3.2k", "Codec2 1.2k"
+    )
+    // Map slider position to JNI profile int:
+    // 0=Studio64k(6), 1=Studio48k(5), 2=Studio32k(4), 3=Auto(7),
+    // 4=Opus24k(0), 5=Opus6k(1), 6=Codec2_3.2k(3), 7=Codec2_1.2k(2)
+    val sliderToProfile = intArrayOf(6, 5, 4, 7, 0, 1, 3, 2)
+    val profileToSlider = mapOf(6 to 0, 5 to 1, 4 to 2, 7 to 3, 0 to 4, 1 to 5, 3 to 6, 2 to 7)
+    val qualityColors = listOf(
+        Color(0xFF22C55E), Color(0xFF4ADE80), Color(0xFF86EFAC), Color(0xFFA3E635),
+        Color(0xFFA3E635), Color(0xFFFACC15), Color(0xFFE97320), Color(0xFF991B1B)
+    )
+    val currentCodec by viewModel.codecChoice.collectAsState()
+    val sliderPos = profileToSlider[currentCodec] ?: 3
+    Text("Quality", style = MaterialTheme.typography.bodyMedium)
+    Text(
+        text = "Decode always accepts all codecs",
+        style = MaterialTheme.typography.bodySmall,
+        color = MaterialTheme.colorScheme.onSurfaceVariant
+    )
+    Spacer(modifier = Modifier.height(4.dp))
+    Text(
+        text = qualityLabels[sliderPos],
+        style = MaterialTheme.typography.titleMedium.copy(fontWeight = FontWeight.Bold),
+        color = qualityColors[sliderPos]
+    )
+    Slider(
+        value = sliderPos.toFloat(),
+        onValueChange = { viewModel.setCodecChoice(sliderToProfile[it.toInt()]) },
+        valueRange = 0f..7f,
+        steps = 6,
+        modifier = Modifier.fillMaxWidth()
+    )
+    Row(
+        modifier = Modifier.fillMaxWidth(),
+        horizontalArrangement = Arrangement.SpaceBetween
+    ) {
+        Text("Best", style = MaterialTheme.typography.labelSmall, color = Color(0xFF22C55E))
+        Text("Lowest", style = MaterialTheme.typography.labelSmall, color = Color(0xFF991B1B))
+    }

     Spacer(modifier = Modifier.height(24.dp))
     Divider()
     Spacer(modifier = Modifier.height(16.dp))
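The `sliderToProfile` array and `profileToSlider` map above are two hand-written tables that must stay exact inverses of each other, or a saved profile will restore to the wrong slider position. A quick Rust sketch of the same table with a round-trip check (the fallback-to-Auto behavior mirrors the `?: 3` in the Kotlin):

```rust
// Same table as the Kotlin UI: slider position -> JNI profile int.
const SLIDER_TO_PROFILE: [i32; 8] = [6, 5, 4, 7, 0, 1, 3, 2];

// Inverse lookup, derived from the table instead of hand-written,
// so the two directions cannot drift apart.
fn profile_to_slider(profile: i32) -> Option<usize> {
    SLIDER_TO_PROFILE.iter().position(|&p| p == profile)
}

fn main() {
    // Round trip: every slider position maps to a profile and back.
    for slider in 0..8 {
        let profile = SLIDER_TO_PROFILE[slider];
        assert_eq!(profile_to_slider(profile), Some(slider));
    }
    // Unknown profile falls back to Auto (slider position 3), like `?: 3`.
    let pos = profile_to_slider(99).unwrap_or(3);
    assert_eq!(pos, 3);
    println!("mapping ok");
}
```

Deriving the inverse from the forward table, as sketched here, is one way to avoid maintaining two copies of the mapping by hand.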
@@ -17,7 +17,7 @@ wzp-crypto = { workspace = true }
 wzp-transport = { workspace = true }
 tokio = { workspace = true }
 tracing = { workspace = true }
-tracing-subscriber = { workspace = true }
+tracing-subscriber = { workspace = true, features = ["env-filter"] }
 bytes = { workspace = true }
 serde = { workspace = true }
 serde_json = "1"
@@ -1,91 +1,128 @@
-//! Lock-free SPSC ring buffers for audio PCM transfer between
-//! Kotlin AudioRecord/AudioTrack threads and the Rust engine.
+//! Lock-free SPSC ring buffer — "Reader-Detects-Lap" architecture.
 //!
-//! These use a simple spin-free design: the producer writes and advances
-//! a write cursor, the consumer reads and advances a read cursor.
-//! Both cursors are atomic so no mutex is needed.
+//! SPSC invariant: the producer ONLY writes `write_pos`, the consumer
+//! ONLY writes `read_pos`. Neither thread touches the other's cursor.
+//!
+//! On overflow (writer laps the reader), the writer simply overwrites
+//! old buffer data. The reader detects the lap via `available() >
+//! RING_CAPACITY` and snaps its own `read_pos` forward.
+//!
+//! Capacity is a power of 2 for bitmask indexing (no modulo).

-use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

-/// Ring buffer capacity in i16 samples.
-/// 960 samples * 10 frames = ~200ms of audio at 48kHz mono.
-const RING_CAPACITY: usize = 960 * 10;
+/// Ring buffer capacity — power of 2 for bitmask indexing.
+/// 16384 samples = 341.3ms at 48kHz mono. 70% more headroom
+/// than the previous 9600 (200ms) for surviving Android GC pauses.
+const RING_CAPACITY: usize = 16384; // 2^14
+const RING_MASK: usize = RING_CAPACITY - 1;

 /// Lock-free single-producer single-consumer ring buffer for i16 PCM samples.
 pub struct AudioRing {
-    buf: Box<[i16; RING_CAPACITY]>,
+    buf: Box<[i16]>,
+    /// Monotonically increasing write cursor. ONLY written by producer.
     write_pos: AtomicUsize,
+    /// Monotonically increasing read cursor. ONLY written by consumer.
     read_pos: AtomicUsize,
+    /// Incremented by reader when it detects it was lapped (overflow).
+    overflow_count: AtomicU64,
+    /// Incremented by reader when ring is empty (underrun).
+    underrun_count: AtomicU64,
 }

-// SAFETY: AudioRing is designed for SPSC — one thread writes, one reads.
-// The atomics ensure visibility. The buffer itself is never accessed
-// from the same index by both threads simultaneously because the
-// producer only writes to positions between write_pos and read_pos,
-// and the consumer only reads from positions between read_pos and write_pos.
+// SAFETY: AudioRing is SPSC — one thread writes (producer), one reads (consumer).
+// The producer only writes write_pos. The consumer only writes read_pos.
+// Neither thread writes the other's cursor. Buffer indices are derived from
+// the owning thread's cursor, ensuring no concurrent access to the same index.
 unsafe impl Send for AudioRing {}
 unsafe impl Sync for AudioRing {}

 impl AudioRing {
     pub fn new() -> Self {
+        debug_assert!(RING_CAPACITY.is_power_of_two());
         Self {
-            buf: Box::new([0i16; RING_CAPACITY]),
+            buf: vec![0i16; RING_CAPACITY].into_boxed_slice(),
             write_pos: AtomicUsize::new(0),
             read_pos: AtomicUsize::new(0),
+            overflow_count: AtomicU64::new(0),
+            underrun_count: AtomicU64::new(0),
         }
     }

-    /// Number of samples available to read.
+    /// Number of samples available to read (clamped to capacity).
     pub fn available(&self) -> usize {
         let w = self.write_pos.load(Ordering::Acquire);
-        let r = self.read_pos.load(Ordering::Acquire);
-        w.wrapping_sub(r)
+        let r = self.read_pos.load(Ordering::Relaxed);
+        w.wrapping_sub(r).min(RING_CAPACITY)
     }

-    /// Number of samples that can be written without overwriting.
+    /// Number of samples that can be written without overwriting unread data.
     pub fn free_space(&self) -> usize {
-        RING_CAPACITY - self.available()
+        RING_CAPACITY.saturating_sub(self.available())
     }

     /// Write samples into the ring. Returns number of samples written.
-    /// Drops oldest samples if the ring is full.
+    ///
+    /// If the ring is full, old data is silently overwritten. The reader
+    /// will detect the lap and self-correct. The writer NEVER touches
+    /// `read_pos` — this is the key invariant that prevents cursor desync.
     pub fn write(&self, samples: &[i16]) -> usize {
-        let w = self.write_pos.load(Ordering::Relaxed);
         let count = samples.len().min(RING_CAPACITY);
+        let w = self.write_pos.load(Ordering::Relaxed);

         for i in 0..count {
-            let idx = (w + i) % RING_CAPACITY;
-            // SAFETY: We're the only writer, and the reader won't read
-            // past read_pos which we haven't advanced past yet.
             unsafe {
                 let ptr = self.buf.as_ptr() as *mut i16;
-                *ptr.add(idx) = samples[i];
+                *ptr.add((w + i) & RING_MASK) = samples[i];
             }
         }

         self.write_pos.store(w.wrapping_add(count), Ordering::Release);
-
-        // If we overwrote unread data, advance read_pos
-        if self.available() > RING_CAPACITY {
-            let new_read = self.write_pos.load(Ordering::Relaxed).wrapping_sub(RING_CAPACITY);
-            self.read_pos.store(new_read, Ordering::Release);
-        }
-
         count
     }

     /// Read samples from the ring into `out`. Returns number of samples read.
+    ///
+    /// If the writer has lapped the reader (overflow), `read_pos` is snapped
+    /// forward to the oldest valid data. This is safe because only the
+    /// reader thread writes `read_pos`.
     pub fn read(&self, out: &mut [i16]) -> usize {
-        let avail = self.available();
-        let count = out.len().min(avail);
+        let w = self.write_pos.load(Ordering::Acquire);
+        let mut r = self.read_pos.load(Ordering::Relaxed);
+
+        let mut avail = w.wrapping_sub(r);
+
+        // Lap detection: writer has overwritten our unread data.
+        // Snap read_pos forward to oldest valid data in the buffer.
+        if avail > RING_CAPACITY {
+            r = w.wrapping_sub(RING_CAPACITY);
+            avail = RING_CAPACITY;
+            self.overflow_count.fetch_add(1, Ordering::Relaxed);
+        }
+
+        let count = out.len().min(avail);
+        if count == 0 {
+            if w == r {
+                self.underrun_count.fetch_add(1, Ordering::Relaxed);
+            }
+            return 0;
+        }

-        let r = self.read_pos.load(Ordering::Relaxed);
         for i in 0..count {
-            let idx = (r + i) % RING_CAPACITY;
-            out[i] = unsafe { *self.buf.as_ptr().add(idx) };
+            out[i] = unsafe { *self.buf.as_ptr().add((r + i) & RING_MASK) };
         }

         self.read_pos.store(r.wrapping_add(count), Ordering::Release);
         count
     }
+
+    /// Number of overflow events (reader was lapped by writer).
+    pub fn overflow_count(&self) -> u64 {
+        self.overflow_count.load(Ordering::Relaxed)
+    }
+
+    /// Number of underrun events (reader found empty buffer).
+    pub fn underrun_count(&self) -> u64 {
+        self.underrun_count.load(Ordering::Relaxed)
+    }
 }
|
|||||||
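The lap detection in the hunk above relies on `write_pos` and `read_pos` growing monotonically (never reduced modulo the capacity), so `write_pos - read_pos` is the exact unread count and a difference larger than the capacity means the writer has lapped the reader. A single-threaded sketch of that index arithmetic — names are illustrative, this is not the crate's actual `AudioRing`:

```rust
// Simplified, non-atomic model of the ring index arithmetic.
// RING_CAPACITY must be a power of two so `& RING_MASK` == `% RING_CAPACITY`.
const RING_CAPACITY: usize = 8;
const RING_MASK: usize = RING_CAPACITY - 1;

struct Ring {
    buf: [i16; RING_CAPACITY],
    write_pos: usize, // monotonically increasing, never masked
    read_pos: usize,
}

impl Ring {
    fn new() -> Self {
        Ring { buf: [0; RING_CAPACITY], write_pos: 0, read_pos: 0 }
    }

    fn write(&mut self, samples: &[i16]) {
        for (i, &s) in samples.iter().enumerate() {
            self.buf[self.write_pos.wrapping_add(i) & RING_MASK] = s;
        }
        self.write_pos = self.write_pos.wrapping_add(samples.len());
    }

    /// Returns (samples_read, lapped).
    fn read(&mut self, out: &mut [i16]) -> (usize, bool) {
        let mut r = self.read_pos;
        let mut avail = self.write_pos.wrapping_sub(r);
        let lapped = avail > RING_CAPACITY;
        if lapped {
            // Writer overwrote unread data: snap to the oldest valid sample.
            r = self.write_pos.wrapping_sub(RING_CAPACITY);
            avail = RING_CAPACITY;
        }
        let count = out.len().min(avail);
        for i in 0..count {
            out[i] = self.buf[r.wrapping_add(i) & RING_MASK];
        }
        self.read_pos = r.wrapping_add(count);
        (count, lapped)
    }
}

fn main() {
    let mut ring = Ring::new();
    ring.write(&[1, 2, 3]);
    let mut out = [0i16; 3];
    assert_eq!(ring.read(&mut out), (3, false));
    assert_eq!(out, [1, 2, 3]);

    // Write 12 samples into an 8-slot ring without reading: reader is lapped.
    let big: Vec<i16> = (10..22).collect();
    ring.write(&big);
    let mut out = [0i16; 8];
    let (n, lapped) = ring.read(&mut out);
    assert!(lapped);
    assert_eq!(n, 8);
    // Oldest surviving sample is big[12 - 8] = 14.
    assert_eq!(out[0], 14);
}
```

Moving the overflow handling from the writer into the reader, as this hunk does, is what makes the snap safe: only the reader thread ever stores `read_pos`.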
```diff
@@ -12,4 +12,13 @@ pub enum EngineCommand {
     ForceProfile(QualityProfile),
     /// Stop the call and shut down the engine.
     Stop,
+    /// Place a direct call to a fingerprint (requires signal connection).
+    PlaceCall { target_fingerprint: String },
+    /// Answer an incoming direct call.
+    AnswerCall {
+        call_id: String,
+        accept_mode: wzp_proto::CallAcceptMode,
+    },
+    /// Reject an incoming direct call.
+    RejectCall { call_id: String },
 }
```
```diff
@@ -9,32 +9,58 @@
 //! and AudioTrack. PCM samples are transferred through lock-free ring buffers.

 use std::net::SocketAddr;
-use std::sync::atomic::{AtomicBool, AtomicU16, AtomicU32, Ordering};
+use std::sync::atomic::{AtomicBool, AtomicU8, AtomicU16, AtomicU32, Ordering};
 use std::sync::{Arc, Mutex};
 use std::time::Instant;

 use bytes::Bytes;
 use tracing::{error, info, warn};
 use wzp_codec::agc::AutoGainControl;
-use wzp_codec::opus_dec::OpusDecoder;
-use wzp_codec::opus_enc::OpusEncoder;
 use wzp_crypto::{KeyExchange, WarzoneKeyExchange};
 use wzp_fec::{RaptorQFecDecoder, RaptorQFecEncoder};
 use wzp_proto::{
-    AudioDecoder, AudioEncoder, CodecId, FecDecoder, FecEncoder,
-    MediaHeader, MediaPacket, MediaTransport, QualityProfile, SignalMessage,
+    AdaptiveQualityController, AudioDecoder, AudioEncoder, CodecId, FecDecoder, FecEncoder,
+    MediaHeader, MediaPacket, MediaTransport, QualityController, QualityProfile, SignalMessage,
 };

 use crate::audio_ring::AudioRing;
 use crate::commands::EngineCommand;
 use crate::stats::{CallState, CallStats};

-/// Opus frame size at 48kHz mono, 20ms = 960 samples.
-const FRAME_SAMPLES: usize = 960;
+/// Max frame size at 48kHz mono (40ms = 1920 samples, for Codec2/Opus6k).
+const MAX_FRAME_SAMPLES: usize = 1920;
+
+/// Sentinel value: no profile change pending.
+const PROFILE_NO_CHANGE: u8 = 0xFF;
+
+/// All quality profiles in index order, for AtomicU8-based signaling.
+const PROFILES: [QualityProfile; 6] = [
+    QualityProfile::STUDIO_64K,   // 0
+    QualityProfile::STUDIO_48K,   // 1
+    QualityProfile::STUDIO_32K,   // 2
+    QualityProfile::GOOD,         // 3
+    QualityProfile::DEGRADED,     // 4
+    QualityProfile::CATASTROPHIC, // 5
+];
+
+fn profile_to_index(p: &QualityProfile) -> u8 {
+    PROFILES.iter().position(|pp| pp.codec == p.codec).map(|i| i as u8).unwrap_or(3)
+}
+
+fn index_to_profile(idx: u8) -> Option<QualityProfile> {
+    PROFILES.get(idx as usize).copied()
+}
+
+/// Compute frame samples at 48kHz for a given profile.
+fn frame_samples_for(profile: &QualityProfile) -> usize {
+    (profile.frame_duration_ms as usize) * 48 // 48000 / 1000
+}

 /// Configuration to start a call.
 pub struct CallStartConfig {
     pub profile: QualityProfile,
+    /// When true, use the relay's chosen_profile from CallAnswer instead of local profile.
+    pub auto_profile: bool,
     pub relay_addr: String,
     pub room: String,
     pub auth_token: Vec<u8>,
```
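The `frame_samples_for` helper added in this hunk is plain unit arithmetic: at 48 kHz mono there are 48 samples per millisecond, so a frame's sample count is its duration times 48. A standalone check of the math behind `MAX_FRAME_SAMPLES` (the function name here is illustrative):

```rust
/// Samples per frame at 48 kHz mono for a given frame duration.
fn frame_samples(frame_duration_ms: usize) -> usize {
    frame_duration_ms * 48 // 48_000 samples/sec / 1000 ms/sec
}

fn main() {
    assert_eq!(frame_samples(20), 960);  // classic 20 ms Opus frame
    assert_eq!(frame_samples(40), 1920); // longest frame, hence MAX_FRAME_SAMPLES = 1920
}
```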
```diff
@@ -46,6 +72,7 @@ impl Default for CallStartConfig {
     fn default() -> Self {
         Self {
             profile: QualityProfile::GOOD,
+            auto_profile: false,
             relay_addr: String::new(),
             room: String::new(),
             auth_token: Vec::new(),
@@ -123,6 +150,7 @@ impl WzpEngine {
         let room = config.room.clone();
         let identity_seed = config.identity_seed;
         let profile = config.profile;
+        let auto_profile = config.auto_profile;
         let alias = config.alias.clone();
         let state = self.state.clone();

@@ -131,7 +159,7 @@ impl WzpEngine {

         let state_clone = state.clone();
         runtime.block_on(async move {
-            if let Err(e) = run_call(relay_addr, &room, &identity_seed, profile, alias.as_deref(), state_clone).await
+            if let Err(e) = run_call(relay_addr, &room, &identity_seed, profile, auto_profile, alias.as_deref(), state_clone).await
             {
                 error!("call failed: {e}");
             }
```
```diff
@@ -169,6 +197,55 @@ impl WzpEngine {
         info!("stop_call: done");
     }

+    /// Ping a relay — same pattern as start_call (creates runtime on calling thread).
+    /// Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or error.
+    pub fn ping_relay(&self, address: &str) -> Result<String, anyhow::Error> {
+        let addr: SocketAddr = address.parse()?;
+
+        let rt = tokio::runtime::Builder::new_current_thread()
+            .enable_all()
+            .build()?;
+
+        let result = rt.block_on(async {
+            let bind: SocketAddr = "0.0.0.0:0".parse().unwrap();
+            let endpoint = wzp_transport::create_endpoint(bind, None)?;
+            let client_cfg = wzp_transport::client_config();
+            let start = Instant::now();
+
+            let conn_result = tokio::time::timeout(
+                std::time::Duration::from_secs(3),
+                wzp_transport::connect(&endpoint, addr, "ping", client_cfg),
+            )
+            .await;
+
+            // Always close endpoint to prevent resource leaks
+            endpoint.close(0u32.into(), b"done");
+
+            let conn = conn_result.map_err(|_| anyhow::anyhow!("timeout"))??;
+            let rtt_ms = start.elapsed().as_millis() as u64;
+            let server_fp = conn
+                .peer_identity()
+                .and_then(|id| id.downcast::<Vec<rustls::pki_types::CertificateDer>>().ok())
+                .and_then(|certs| certs.first().map(|c| {
+                    use std::hash::{Hash, Hasher};
+                    let mut h = std::collections::hash_map::DefaultHasher::new();
+                    c.as_ref().hash(&mut h);
+                    format!("{:016x}", h.finish())
+                }))
+                .unwrap_or_default();
+            conn.close(0u32.into(), b"ping");
+
+            Ok::<_, anyhow::Error>(format!(r#"{{"rtt_ms":{},"server_fingerprint":"{}"}}"#, rtt_ms, server_fp))
+        });
+
+        // Shutdown runtime cleanly with timeout
+        rt.shutdown_timeout(std::time::Duration::from_millis(500));
+        result
+    }
+
+    /// Start persistent signaling connection for direct calls.
+    // Signal methods (start_signaling, place_call, answer_call) moved to signal_mgr.rs
+
     pub fn set_mute(&self, muted: bool) {
         self.state.muted.store(muted, Ordering::Relaxed);
     }
```
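The `server_fingerprint` in `ping_relay` is computed by feeding the DER certificate bytes into std's `DefaultHasher`. Worth noting: `DefaultHasher`'s algorithm is not specified to be stable across Rust releases, so this 64-bit value is only suitable for display within one build, not for certificate pinning. The pattern in isolation (any byte slice stands in for the certificate here):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// 64-bit display fingerprint of an opaque byte blob, hex-encoded.
fn fingerprint(bytes: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    let fp = fingerprint(b"example-der-bytes");
    assert_eq!(fp.len(), 16); // u64 zero-padded to exactly 16 hex chars
    // DefaultHasher::new() uses fixed keys, so the value is stable in-process.
    assert_eq!(fp, fingerprint(b"example-der-bytes"));
}
```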
```diff
@@ -183,6 +260,9 @@ impl WzpEngine {
             stats.duration_secs = start.elapsed().as_secs_f64();
         }
         stats.audio_level = self.state.audio_level_rms.load(Ordering::Relaxed);
+        stats.playout_overflows = self.state.playout_ring.overflow_count();
+        stats.playout_underruns = self.state.playout_ring.underrun_count();
+        stats.capture_overflows = self.state.capture_ring.overflow_count();
         stats
     }
```
```diff
@@ -224,10 +304,10 @@ async fn run_call(
     room: &str,
     identity_seed: &[u8; 32],
     profile: QualityProfile,
+    auto_profile: bool,
     alias: Option<&str>,
     state: Arc<EngineState>,
 ) -> Result<(), anyhow::Error> {
-    let _ = rustls::crypto::ring::default_provider().install_default();

     let bind_addr: SocketAddr = "0.0.0.0:0".parse().unwrap();
     let endpoint = wzp_transport::create_endpoint(bind_addr, None)?;
@@ -258,6 +338,9 @@ async fn run_call(
         ephemeral_pub,
         signature,
         supported_profiles: vec![
+            QualityProfile::STUDIO_64K,
+            QualityProfile::STUDIO_48K,
+            QualityProfile::STUDIO_32K,
             QualityProfile::GOOD,
             QualityProfile::DEGRADED,
             QualityProfile::CATASTROPHIC,
@@ -272,8 +355,8 @@ async fn run_call(
         .await?
         .ok_or_else(|| anyhow::anyhow!("connection closed before CallAnswer"))?;

-    let relay_ephemeral_pub = match answer {
-        SignalMessage::CallAnswer { ephemeral_pub, .. } => ephemeral_pub,
+    let (relay_ephemeral_pub, chosen_profile) = match answer {
+        SignalMessage::CallAnswer { ephemeral_pub, chosen_profile, .. } => (ephemeral_pub, chosen_profile),
         other => {
             return Err(anyhow::anyhow!(
                 "expected CallAnswer, got {:?}",
```
```diff
@@ -282,19 +365,25 @@ async fn run_call(
         }
     };

+    // Auto mode: use the relay's chosen profile instead of the local preference
+    let profile = if auto_profile {
+        info!(chosen = ?chosen_profile.codec, "auto mode: using relay's chosen profile");
+        chosen_profile
+    } else {
+        profile
+    };

     let _session = kx.derive_session(&relay_ephemeral_pub)?;
-    info!("handshake complete, call active");
+    info!(codec = ?profile.codec, "handshake complete, call active");

     {
         let mut stats = state.stats.lock().unwrap();
         stats.state = CallState::Active;
     }

-    // Initialize Opus codec
-    let mut encoder =
-        OpusEncoder::new(profile).map_err(|e| anyhow::anyhow!("opus encoder init: {e}"))?;
-    let mut decoder =
-        OpusDecoder::new(profile).map_err(|e| anyhow::anyhow!("opus decoder init: {e}"))?;
+    // Initialize codec (Opus or Codec2 based on profile)
+    let mut encoder = wzp_codec::create_encoder(profile);
+    let mut decoder = wzp_codec::create_decoder(profile);

     // Initialize FEC encoder/decoder
     let mut fec_enc = wzp_fec::create_encoder(&profile);
```
```diff
@@ -304,21 +393,37 @@ async fn run_call(
     let mut capture_agc = AutoGainControl::new();
     let mut playout_agc = AutoGainControl::new();

+    let mut frame_samples = frame_samples_for(&profile);
     info!(
+        codec = ?profile.codec,
         fec_ratio = profile.fec_ratio,
         frames_per_block = profile.frames_per_block,
-        "codec + FEC + AGC initialized (48kHz mono, 20ms frames)"
+        frame_ms = profile.frame_duration_ms,
+        frame_samples,
+        "codec + FEC + AGC initialized"
     );

+    {
+        let mut stats = state.stats.lock().unwrap();
+        stats.current_codec = format!("{:?}", profile.codec);
+        stats.auto_mode = auto_profile;
+    }

     let seq = AtomicU16::new(0);
     let ts = AtomicU32::new(0);
     let transport_recv = transport.clone();

-    // Pre-allocate buffers
-    let mut capture_buf = vec![0i16; FRAME_SAMPLES];
+    // Adaptive quality: shared AtomicU8 between recv task (writer) and send task (reader).
+    // 0xFF = no change pending, 0-5 = index into PROFILES array.
+    let pending_profile = Arc::new(AtomicU8::new(PROFILE_NO_CHANGE));
+    let pending_profile_recv = pending_profile.clone();
+
+    // Pre-allocate buffers (sized for current profile)
+    let mut capture_buf = vec![0i16; frame_samples];
     let mut encode_buf = vec![0u8; encoder.max_frame_bytes()];
     let mut frame_in_block: u8 = 0;
     let mut block_id: u8 = 0;
+    let mut current_profile = profile;

     // Send task: capture ring → Opus encode → FEC → MediaPackets
     //
```
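The `pending_profile` cell introduced here is a single-slot mailbox: one task publishes a profile index with a `Release` store, the other takes it by swapping in the sentinel with `Acquire`, so each recommendation is consumed exactly once. A minimal sketch of the pattern (names illustrative):

```rust
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Arc;
use std::thread;

const NO_CHANGE: u8 = 0xFF; // sentinel: nothing pending

fn main() {
    let mailbox = Arc::new(AtomicU8::new(NO_CHANGE));

    // Producer (the recv task's role): publish a profile index.
    let tx = mailbox.clone();
    let producer = thread::spawn(move || {
        tx.store(4, Ordering::Release);
    });
    producer.join().unwrap();

    // Consumer (the send task's role): take the value and reset to the
    // sentinel in one atomic step, so the recommendation is applied once.
    let taken = mailbox.swap(NO_CHANGE, Ordering::Acquire);
    assert_eq!(taken, 4);
    // A second poll sees no pending change.
    assert_eq!(mailbox.swap(NO_CHANGE, Ordering::Acquire), NO_CHANGE);
}
```

The `swap` rather than `load` + `store` is what keeps the consumer race-free if the producer publishes again between the two steps.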
```diff
@@ -333,19 +438,58 @@ async fn run_call(
         let mut last_stats_log = Instant::now();
         let mut frames_sent: u64 = 0;
         let mut frames_dropped: u64 = 0;
+        // Per-step timing accumulators (reset every stats log)
+        let mut t_agc_us: u64 = 0;
+        let mut t_opus_us: u64 = 0;
+        let mut t_fec_us: u64 = 0;
+        let mut t_send_us: u64 = 0;
+        let mut t_frames: u64 = 0;
         loop {
             if !state.running.load(Ordering::Relaxed) {
                 break;
             }

+            // Check for adaptive profile switch from recv task
+            if auto_profile {
+                let p = pending_profile.swap(PROFILE_NO_CHANGE, Ordering::Acquire);
+                if p != PROFILE_NO_CHANGE {
+                    if let Some(new_profile) = index_to_profile(p) {
+                        info!(
+                            from = ?current_profile.codec,
+                            to = ?new_profile.codec,
+                            "auto: switching encoder profile"
+                        );
+                        if let Err(e) = encoder.set_profile(new_profile) {
+                            warn!("encoder set_profile failed: {e}");
+                        } else {
+                            fec_enc = wzp_fec::create_encoder(&new_profile);
+                            current_profile = new_profile;
+                            let new_frame_samples = frame_samples_for(&new_profile);
+                            if new_frame_samples != frame_samples {
+                                frame_samples = new_frame_samples;
+                                capture_buf.resize(frame_samples, 0);
+                            }
+                            encode_buf.resize(encoder.max_frame_bytes(), 0);
+                            // Reset FEC block state for clean switch
+                            frame_in_block = 0;
+                            block_id = block_id.wrapping_add(1);
+                            // Update stats with new codec
+                            if let Ok(mut stats) = state.stats.lock() {
+                                stats.current_codec = format!("{:?}", new_profile.codec);
+                            }
+                        }
+                    }
+                }
+            }

             let avail = state.capture_ring.available();
-            if avail < FRAME_SAMPLES {
+            if avail < frame_samples {
                 tokio::time::sleep(std::time::Duration::from_millis(5)).await;
                 continue;
             }

             let read = state.capture_ring.read(&mut capture_buf);
-            if read < FRAME_SAMPLES {
+            if read < frame_samples {
                 continue;
             }
```
```diff
@@ -356,9 +500,12 @@ async fn run_call(
             }

             // AGC: normalize capture volume before encoding
+            let t0 = Instant::now();
             capture_agc.process_frame(&mut capture_buf);
+            t_agc_us += t0.elapsed().as_micros() as u64;

             // Opus encode
+            let t0 = Instant::now();
             let encoded_len = match encoder.encode(&capture_buf, &mut encode_buf) {
                 Ok(n) => n,
                 Err(e) => {
@@ -366,19 +513,20 @@ async fn run_call(
                     continue;
                 }
             };
+            t_opus_us += t0.elapsed().as_micros() as u64;
             let encoded = &encode_buf[..encoded_len];

             // Build source packet
             let s = seq.fetch_add(1, Ordering::Relaxed);
-            let t = ts.fetch_add(FRAME_SAMPLES as u32, Ordering::Relaxed);
+            let t = ts.fetch_add(frame_samples as u32, Ordering::Relaxed);

             let source_pkt = MediaPacket {
                 header: MediaHeader {
                     version: 0,
                     is_repair: false,
-                    codec_id: profile.codec,
+                    codec_id: current_profile.codec,
                     has_quality_report: false,
-                    fec_ratio_encoded: MediaHeader::encode_fec_ratio(profile.fec_ratio),
+                    fec_ratio_encoded: MediaHeader::encode_fec_ratio(current_profile.fec_ratio),
                     seq: s,
                     timestamp: t,
                     fec_block: block_id,
@@ -391,6 +539,7 @@ async fn run_call(
             };

             // Send source packet — drop on error, never break
+            let t0 = Instant::now();
             if let Err(e) = transport.send_media(&source_pkt).await {
                 send_errors += 1;
                 frames_dropped += 1;
```
```diff
@@ -405,19 +554,22 @@ async fn run_call(
                     last_send_error_log = Instant::now();
                 }
                 // Don't feed to FEC either — the source is lost
+                t_send_us += t0.elapsed().as_micros() as u64;
                 continue;
             }
+            t_send_us += t0.elapsed().as_micros() as u64;
             frames_sent += 1;

             // Feed encoded frame to FEC encoder
+            let t0 = Instant::now();
             if let Err(e) = fec_enc.add_source_symbol(encoded) {
                 warn!("fec add_source error: {e}");
             }
             frame_in_block += 1;

             // When block is full, generate repair packets
-            if frame_in_block >= profile.frames_per_block {
-                match fec_enc.generate_repair(profile.fec_ratio) {
+            if frame_in_block >= current_profile.frames_per_block {
+                match fec_enc.generate_repair(current_profile.fec_ratio) {
                     Ok(repairs) => {
                         let repair_count = repairs.len();
                         for (sym_idx, repair_data) in repairs {
@@ -426,10 +578,10 @@ async fn run_call(
                                 header: MediaHeader {
                                     version: 0,
                                     is_repair: true,
-                                    codec_id: profile.codec,
+                                    codec_id: current_profile.codec,
                                     has_quality_report: false,
                                     fec_ratio_encoded: MediaHeader::encode_fec_ratio(
-                                        profile.fec_ratio,
+                                        current_profile.fec_ratio,
                                     ),
                                     seq: rs,
                                     timestamp: t,
@@ -452,7 +604,7 @@ async fn run_call(
                         info!(
                             block_id,
                             repair_count,
-                            fec_ratio = profile.fec_ratio,
+                            fec_ratio = current_profile.fec_ratio,
                             "FEC block complete"
                         );
                     }
@@ -466,9 +618,12 @@ async fn run_call(
                 block_id = block_id.wrapping_add(1);
                 frame_in_block = 0;
             }
+            t_fec_us += t0.elapsed().as_micros() as u64;
+            t_frames += 1;

             // Periodic stats every 5 seconds
             if last_stats_log.elapsed().as_secs() >= 5 {
+                let avg = |total: u64| if t_frames > 0 { total / t_frames } else { 0 };
                 info!(
                     seq = s,
                     block_id,
@@ -476,16 +631,23 @@ async fn run_call(
                     frames_dropped,
                     send_errors,
                     ring_avail = state.capture_ring.available(),
+                    capture_overflows = state.capture_ring.overflow_count(),
+                    avg_agc_us = avg(t_agc_us),
+                    avg_opus_us = avg(t_opus_us),
+                    avg_fec_us = avg(t_fec_us),
+                    avg_send_us = avg(t_send_us),
+                    avg_total_us = avg(t_agc_us + t_opus_us + t_fec_us + t_send_us),
                     "send stats"
                 );
+                t_agc_us = 0; t_opus_us = 0; t_fec_us = 0; t_send_us = 0; t_frames = 0;
                 last_stats_log = Instant::now();
             }
         }
         info!(frames_sent, frames_dropped, send_errors, "send task ended");
     };

-    // Pre-allocate decode buffer
-    let mut decode_buf = vec![0i16; FRAME_SAMPLES];
+    // Pre-allocate decode buffer (max size to handle any incoming codec)
+    let mut decode_buf = vec![0i16; MAX_FRAME_SAMPLES];

     // Recv task: MediaPackets → FEC decode → Opus decode → playout ring
     let recv_task = async {
```
```diff
@@ -495,6 +657,8 @@ async fn run_call(
         let mut last_recv_instant = Instant::now();
         let mut max_recv_gap_ms: u64 = 0;
         let mut last_stats_log = Instant::now();
+        let mut quality_ctrl = AdaptiveQualityController::new();
+        let mut last_peer_codec: Option<CodecId> = None;
         info!("recv task started (Opus + RaptorQ FEC)");
         loop {
             if !state.running.load(Ordering::Relaxed) {
@@ -517,6 +681,23 @@ async fn run_call(
                 );
             }

+            // Adaptive quality: ingest quality reports from relay
+            if auto_profile {
+                if let Some(ref qr) = pkt.quality_report {
+                    if let Some(new_profile) = quality_ctrl.observe(qr) {
+                        let idx = profile_to_index(&new_profile);
+                        info!(
+                            loss = qr.loss_percent(),
+                            rtt = qr.rtt_ms(),
+                            tier = ?quality_ctrl.tier(),
+                            to = ?new_profile.codec,
+                            "auto: quality adapter recommends switch"
+                        );
+                        pending_profile_recv.store(idx, Ordering::Release);
+                    }
+                }
+            }

             let is_repair = pkt.header.is_repair;
             let pkt_block = pkt.header.fec_block;
             let pkt_symbol = pkt.header.fec_symbol;
```
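The internals of `AdaptiveQualityController` are not part of this diff; a plausible shape for such an `observe`-style controller — thresholds, names, and tier count here are purely illustrative, not the crate's actual logic — is a loss-rate ladder with a debounce counter so a single bad report does not trigger a switch:

```rust
/// Illustrative only: a loss-based tier ladder with hysteresis.
struct TierController {
    tier: usize, // 0 = best quality … 2 = worst
    bad_streak: u32,
    good_streak: u32,
}

impl TierController {
    fn new() -> Self {
        Self { tier: 0, bad_streak: 0, good_streak: 0 }
    }

    /// Feed one quality report; returns Some(new_tier) when a switch is due.
    fn observe(&mut self, loss_percent: f32) -> Option<usize> {
        if loss_percent > 10.0 {
            self.bad_streak += 1;
            self.good_streak = 0;
        } else if loss_percent < 2.0 {
            self.good_streak += 1;
            self.bad_streak = 0;
        }
        // Require 3 consecutive reports before moving (debounce).
        if self.bad_streak >= 3 && self.tier < 2 {
            self.tier += 1;
            self.bad_streak = 0;
            return Some(self.tier);
        }
        if self.good_streak >= 3 && self.tier > 0 {
            self.tier -= 1;
            self.good_streak = 0;
            return Some(self.tier);
        }
        None
    }
}

fn main() {
    let mut c = TierController::new();
    assert_eq!(c.observe(15.0), None);
    assert_eq!(c.observe(15.0), None);
    assert_eq!(c.observe(15.0), Some(1)); // third bad report downgrades
    assert_eq!(c.observe(1.0), None);
    assert_eq!(c.observe(1.0), None);
    assert_eq!(c.observe(1.0), Some(0)); // three clean reports upgrade
}
```

Whatever the real controller does, the hunk's division of labor holds: the recv task only observes and publishes an index; the encoder switch itself happens on the send task.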
```diff
@@ -530,7 +711,34 @@ async fn run_call(
             );

             // Source packets: decode directly
-            if !is_repair {
+            if !is_repair && pkt.header.codec_id != CodecId::ComfortNoise {
+                // Switch decoder to match incoming codec if different
+                if pkt.header.codec_id != decoder.codec_id() {
+                    let switch_profile = match pkt.header.codec_id {
+                        CodecId::Opus24k => QualityProfile::GOOD,
+                        CodecId::Opus6k => QualityProfile::DEGRADED,
+                        CodecId::Opus32k => QualityProfile::STUDIO_32K,
+                        CodecId::Opus48k => QualityProfile::STUDIO_48K,
+                        CodecId::Opus64k => QualityProfile::STUDIO_64K,
+                        CodecId::Codec2_1200 => QualityProfile::CATASTROPHIC,
+                        CodecId::Codec2_3200 => QualityProfile {
+                            codec: CodecId::Codec2_3200,
+                            fec_ratio: 0.5,
+                            frame_duration_ms: 20,
+                            frames_per_block: 5,
+                        },
+                        other => QualityProfile { codec: other, ..QualityProfile::GOOD },
+                    };
+                    info!(from = ?decoder.codec_id(), to = ?pkt.header.codec_id, "recv: switching decoder");
+                    let _ = decoder.set_profile(switch_profile);
+                }
+                // Track peer codec for UI display
+                if last_peer_codec != Some(pkt.header.codec_id) {
+                    last_peer_codec = Some(pkt.header.codec_id);
+                    if let Ok(mut stats) = state.stats.lock() {
+                        stats.peer_codec = format!("{:?}", pkt.header.codec_id);
+                    }
+                }
                 match decoder.decode(&pkt.payload, &mut decode_buf) {
                     Ok(samples) => {
                         playout_agc.process_frame(&mut decode_buf[..samples]);
```
```diff
@@ -578,6 +786,8 @@ async fn run_call(
                     recv_errors,
                     max_recv_gap_ms,
                     playout_avail = state.playout_ring.available(),
+                    playout_overflows = state.playout_ring.overflow_count(),
+                    playout_underruns = state.playout_ring.underrun_count(),
                     "recv stats"
                 );
                 max_recv_gap_ms = 0;
@@ -643,6 +853,7 @@ async fn run_call(
                 .map(|p| crate::stats::RoomMember {
                     fingerprint: p.fingerprint.clone(),
                     alias: p.alias.clone(),
+                    relay_label: p.relay_label.clone(),
                 })
                 .collect();
             let mut stats = state_signal.stats.lock().unwrap();
```
```diff
@@ -21,11 +21,24 @@ unsafe fn handle_ref(handle: jlong) -> &'static mut EngineHandle {
     unsafe { &mut *(handle as *mut EngineHandle) }
 }

+/// 7 = auto (use relay's chosen profile)
+const PROFILE_AUTO: jint = 7;
+
 fn profile_from_int(value: jint) -> QualityProfile {
     match value {
-        1 => QualityProfile::DEGRADED,
-        2 => QualityProfile::CATASTROPHIC,
-        _ => QualityProfile::GOOD,
+        0 => QualityProfile::GOOD,         // Opus 24k
+        1 => QualityProfile::DEGRADED,     // Opus 6k
+        2 => QualityProfile::CATASTROPHIC, // Codec2 1.2k
+        3 => QualityProfile {              // Codec2 3.2k
+            codec: wzp_proto::CodecId::Codec2_3200,
+            fec_ratio: 0.5,
+            frame_duration_ms: 20,
+            frames_per_block: 5,
+        },
+        4 => QualityProfile::STUDIO_32K,   // Opus 32k
+        5 => QualityProfile::STUDIO_48K,   // Opus 48k
+        6 => QualityProfile::STUDIO_64K,   // Opus 64k
+        _ => QualityProfile::GOOD,         // auto falls back to GOOD
     }
 }
```
@@ -35,11 +48,25 @@ static INIT_LOGGING: Once = Once::new();
 /// Safe to call multiple times — only the first call takes effect.
 fn init_logging() {
     INIT_LOGGING.call_once(|| {
-        use tracing_subscriber::layer::SubscriberExt;
-        use tracing_subscriber::util::SubscriberInitExt;
-        if let Ok(layer) = tracing_android::layer("wzp_android") {
-            let _ = tracing_subscriber::registry().with(layer).try_init();
-        }
+        // Wrap in catch_unwind — sharded_slab allocation inside
+        // tracing_subscriber::registry() can crash on some Android
+        // devices if scudo malloc fails during early initialization.
+        let _ = std::panic::catch_unwind(|| {
+            use tracing_subscriber::layer::SubscriberExt;
+            use tracing_subscriber::util::SubscriberInitExt;
+            use tracing_subscriber::EnvFilter;
+            if let Ok(layer) = tracing_android::layer("wzp_android") {
+                // Filter: INFO for our crates, WARN for everything else.
+                // The jni crate emits VERBOSE logs for every method lookup
+                // (~10 lines per JNI call, 100+ calls/sec) which floods logcat
+                // and causes the system to kill the app.
+                let filter = EnvFilter::new("warn,wzp_android=info,wzp_proto=info,wzp_transport=info,wzp_codec=info,wzp_fec=info,wzp_crypto=info");
+                let _ = tracing_subscriber::registry()
+                    .with(layer)
+                    .with(filter)
+                    .try_init();
+            }
+        });
     });
 }
 
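The `Once` + `catch_unwind` combination in the hunk above is a general pattern: the `Once` makes initialization idempotent, and the inner `catch_unwind` keeps a panic during setup from unwinding across the JNI boundary. A minimal sketch of just that control flow (with a counter standing in for the subscriber setup):

```rust
use std::sync::Once;
use std::sync::atomic::{AtomicU32, Ordering};

static INIT: Once = Once::new();
static CALLS: AtomicU32 = AtomicU32::new(0);

// Sketch of the diff's pattern: Once guarantees single execution,
// catch_unwind absorbs a panic inside the init body.
fn init_logging() {
    INIT.call_once(|| {
        let _ = std::panic::catch_unwind(|| {
            CALLS.fetch_add(1, Ordering::SeqCst);
            // real code builds the tracing subscriber here
        });
    });
}

fn main() {
    init_logging();
    init_logging();
    // second call is a no-op: the body ran exactly once
    assert_eq!(CALLS.load(Ordering::SeqCst), 1);
    println!("ok");
}
```

Note the `catch_unwind` sits inside the `call_once` closure, so a panicking body still counts as "initialized" rather than poisoning the `Once`.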
@@ -50,6 +77,9 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeInit(
 ) -> jlong {
     let result = panic::catch_unwind(|| {
         init_logging();
+        // Install rustls crypto provider ONCE on the main thread.
+        // Must not be called per-thread — conflicts with Android's system libcrypto.so TLS keys.
+        let _ = rustls::crypto::ring::default_provider().install_default();
         let handle = Box::new(EngineHandle {
             engine: WzpEngine::new(),
         });
@@ -71,6 +101,7 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeStartCall(
     seed_hex_j: JString,
     token_j: JString,
     alias_j: JString,
+    profile_j: jint,
 ) -> jint {
     let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
         let relay_addr: String = env.get_string(&relay_addr_j).map(|s| s.into()).unwrap_or_default();
@@ -96,7 +127,8 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeStartCall(
     }
 
     let config = CallStartConfig {
-        profile: QualityProfile::GOOD,
+        profile: profile_from_int(profile_j),
+        auto_profile: profile_j == PROFILE_AUTO,
         relay_addr,
         room,
         auth_token: if token.is_empty() { Vec::new() } else { token.into_bytes() },
@@ -209,7 +241,6 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeWriteAudio(
         return 0;
     }
     let mut buf = vec![0i16; len];
-    // GetShortArrayRegion copies Java array into our buffer
     if env.get_short_array_region(&pcm, 0, &mut buf).is_err() {
         return 0;
     }
@@ -243,6 +274,56 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeReadAudio(
     result.unwrap_or(0)
 }
 
+/// Write captured PCM from a DirectByteBuffer — zero JNI array copies.
+/// The ByteBuffer must contain little-endian i16 samples.
+/// Called from the AudioRecord capture thread.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeWriteAudioDirect(
+    env: JNIEnv,
+    _class: JClass,
+    handle: jlong,
+    buffer: jni::objects::JByteBuffer,
+    sample_count: jint,
+) -> jint {
+    let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
+        let h = unsafe { handle_ref(handle) };
+        let ptr = env.get_direct_buffer_address(&buffer).unwrap_or(std::ptr::null_mut());
+        if ptr.is_null() || sample_count <= 0 {
+            return 0;
+        }
+        let samples = unsafe {
+            std::slice::from_raw_parts(ptr as *const i16, sample_count as usize)
+        };
+        h.engine.write_audio(samples) as jint
+    }));
+    result.unwrap_or(0)
+}
+
+/// Read decoded PCM into a DirectByteBuffer — zero JNI array copies.
+/// The ByteBuffer will be filled with little-endian i16 samples.
+/// Called from the AudioTrack playout thread.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeReadAudioDirect(
+    env: JNIEnv,
+    _class: JClass,
+    handle: jlong,
+    buffer: jni::objects::JByteBuffer,
+    max_samples: jint,
+) -> jint {
+    let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
+        let h = unsafe { handle_ref(handle) };
+        let ptr = env.get_direct_buffer_address(&buffer).unwrap_or(std::ptr::null_mut());
+        if ptr.is_null() || max_samples <= 0 {
+            return 0;
+        }
+        let samples = unsafe {
+            std::slice::from_raw_parts_mut(ptr as *mut i16, max_samples as usize)
+        };
+        h.engine.read_audio(samples) as jint
+    }));
+    result.unwrap_or(0)
+}
+
 #[unsafe(no_mangle)]
 pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeDestroy(
     _env: JNIEnv,
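The direct-buffer entry points above boil down to one operation: take the raw pointer that `get_direct_buffer_address` hands back and view it as an `i16` slice, guarding against null and a non-positive length first. A stand-alone sketch of that reinterpretation (plain pointers standing in for the JNI buffer; alignment of the backing allocation is assumed, as the JNI code assumes it):

```rust
// View a raw pointer + length as i16 samples, with the same null/len guards
// the JNI functions apply before calling from_raw_parts.
fn read_samples(ptr: *const i16, count: usize) -> Vec<i16> {
    if ptr.is_null() || count == 0 {
        return Vec::new();
    }
    // Safety: caller guarantees ptr is valid for `count` aligned i16 reads.
    let samples = unsafe { std::slice::from_raw_parts(ptr, count) };
    samples.to_vec()
}

fn main() {
    let backing: Vec<i16> = vec![100, -200, 300];
    let got = read_samples(backing.as_ptr(), backing.len());
    assert_eq!(got, vec![100, -200, 300]);
    // null guard short-circuits, as in the diff
    assert!(read_samples(std::ptr::null(), 3).is_empty());
    println!("ok");
}
```

The point of the guards is that a Kotlin caller passing a non-direct ByteBuffer (address lookup fails) degrades to "0 samples written" instead of undefined behavior.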
@@ -254,3 +335,177 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeDestroy(
         drop(h);
     }));
 }
+
+/// Ping a relay server — instance method, requires engine handle.
+/// Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or null on failure.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativePingRelay<'a>(
+    mut env: JNIEnv<'a>,
+    _class: JClass,
+    handle: jlong,
+    relay_j: JString,
+) -> jstring {
+    let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
+        let h = unsafe { handle_ref(handle) };
+        let relay: String = env.get_string(&relay_j).map(|s| s.into()).unwrap_or_default();
+        match h.engine.ping_relay(&relay) {
+            Ok(json) => Some(json),
+            Err(_) => None,
+        }
+    }));
+
+    let json = match result {
+        Ok(Some(s)) => s,
+        _ => return JObject::null().into_raw(),
+    };
+    env.new_string(&json)
+        .map(|s| s.into_raw())
+        .unwrap_or(JObject::null().into_raw())
+}
+
+/// Get the identity fingerprint for a seed hex string.
+/// Returns the full fingerprint (xxxx:xxxx:...) or empty string on error.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeGetFingerprint<'a>(
+    mut env: JNIEnv<'a>,
+    _class: JClass,
+    seed_hex_j: JString,
+) -> jstring {
+    let seed_hex: String = env.get_string(&seed_hex_j).map(|s| s.into()).unwrap_or_default();
+    let fp = if seed_hex.is_empty() {
+        String::new()
+    } else {
+        match wzp_crypto::Seed::from_hex(&seed_hex) {
+            Ok(seed) => {
+                let id = seed.derive_identity();
+                id.public_identity().fingerprint.to_string()
+            }
+            Err(_) => String::new(),
+        }
+    };
+    env.new_string(&fp)
+        .map(|s| s.into_raw())
+        .unwrap_or(JObject::null().into_raw())
+}
+
+// ── Direct calling JNI functions ──
+
+// ── SignalManager JNI functions ──
+
+/// Opaque handle for SignalManager (separate from EngineHandle).
+struct SignalHandle {
+    mgr: crate::signal_mgr::SignalManager,
+}
+
+unsafe fn signal_ref(handle: jlong) -> &'static SignalHandle {
+    unsafe { &*(handle as *const SignalHandle) }
+}
+
+/// Connect to relay for signaling. Returns handle (jlong) or 0 on error.
+/// Blocks up to 10s waiting for the internal signal thread to connect.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_SignalManager_nativeSignalConnect<'a>(
+    mut env: JNIEnv<'a>,
+    _class: JClass,
+    relay_j: JString,
+    seed_j: JString,
+) -> jlong {
+    info!("nativeSignalConnect: entered");
+    let relay: String = env.get_string(&relay_j).map(|s| s.into()).unwrap_or_default();
+    let seed: String = env.get_string(&seed_j).map(|s| s.into()).unwrap_or_default();
+    info!(relay = %relay, seed_len = seed.len(), "nativeSignalConnect: parsed strings");
+
+    // start() spawns an internal thread (connect+register+recv, ONE runtime, never dropped).
+    // Blocks up to 10s waiting for the connect+register to complete.
+    match crate::signal_mgr::SignalManager::start(&relay, &seed) {
+        Ok(mgr) => {
+            let handle = Box::new(SignalHandle { mgr });
+            Box::into_raw(handle) as jlong
+        }
+        Err(e) => {
+            error!("signal connect failed: {e}");
+            0
+        }
+    }
+}
+
+/// Get signal state as JSON string.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_SignalManager_nativeSignalGetState<'a>(
+    mut env: JNIEnv<'a>,
+    _class: JClass,
+    handle: jlong,
+) -> jstring {
+    if handle == 0 { return JObject::null().into_raw(); }
+    let h = signal_ref(handle);
+    let json = h.mgr.get_state_json();
+    env.new_string(&json)
+        .map(|s| s.into_raw())
+        .unwrap_or(JObject::null().into_raw())
+}
+
+/// Place a direct call.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_SignalManager_nativeSignalPlaceCall<'a>(
+    mut env: JNIEnv<'a>,
+    _class: JClass,
+    handle: jlong,
+    target_j: JString,
+) -> jint {
+    if handle == 0 { return -1; }
+    let h = signal_ref(handle);
+    let target: String = env.get_string(&target_j).map(|s| s.into()).unwrap_or_default();
+    match h.mgr.place_call(&target) {
+        Ok(()) => 0,
+        Err(e) => { error!("place_call: {e}"); -1 }
+    }
+}
+
+/// Answer an incoming call.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_SignalManager_nativeSignalAnswerCall<'a>(
+    mut env: JNIEnv<'a>,
+    _class: JClass,
+    handle: jlong,
+    call_id_j: JString,
+    mode: jint,
+) -> jint {
+    if handle == 0 { return -1; }
+    let h = signal_ref(handle);
+    let call_id: String = env.get_string(&call_id_j).map(|s| s.into()).unwrap_or_default();
+    let accept_mode = match mode {
+        0 => wzp_proto::CallAcceptMode::Reject,
+        1 => wzp_proto::CallAcceptMode::AcceptTrusted,
+        _ => wzp_proto::CallAcceptMode::AcceptGeneric,
+    };
+    match h.mgr.answer_call(&call_id, accept_mode) {
+        Ok(()) => 0,
+        Err(e) => { error!("answer_call: {e}"); -1 }
+    }
+}
+
+/// Send hangup signal.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_SignalManager_nativeSignalHangup(
+    _env: JNIEnv,
+    _class: JClass,
+    handle: jlong,
+) {
+    if handle == 0 { return; }
+    let h = signal_ref(handle);
+    h.mgr.hangup();
+}
+
+/// Destroy the signal manager and free resources.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_SignalManager_nativeSignalDestroy(
+    _env: JNIEnv,
+    _class: JClass,
+    handle: jlong,
+) {
+    if handle == 0 { return; }
+    let h = signal_ref(handle);
+    h.mgr.stop();
+    // Reclaim the Box
+    let _ = unsafe { Box::from_raw(handle as *mut SignalHandle) };
+}
@@ -14,5 +14,6 @@ pub mod audio_ring;
 pub mod commands;
 pub mod engine;
 pub mod pipeline;
+pub mod signal_mgr;
 pub mod stats;
 pub mod jni_bridge;
288	crates/wzp-android/src/signal_mgr.rs	Normal file
@@ -0,0 +1,288 @@
+//! Persistent signal connection manager for direct 1:1 calls.
+//!
+//! Separate from the media engine — survives across calls.
+//! Connects to relay via `_signal` SNI, registers presence,
+//! and handles call signaling (offer/answer/setup/hangup).
+
+use std::net::SocketAddr;
+use std::sync::atomic::{AtomicBool, Ordering};
+use std::sync::{Arc, Mutex};
+
+use tracing::{error, info, warn};
+use wzp_proto::{MediaTransport, SignalMessage};
+
+/// Signal connection status.
+#[derive(Clone, Debug, Default, serde::Serialize)]
+pub struct SignalState {
+    pub status: String, // "idle", "registered", "ringing", "incoming", "setup"
+    pub fingerprint: String,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub incoming_call_id: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub incoming_caller_fp: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub incoming_caller_alias: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub call_setup_relay: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub call_setup_room: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub call_setup_id: Option<String>,
+}
+
+/// Manages a persistent `_signal` QUIC connection to a relay.
+pub struct SignalManager {
+    transport: Arc<wzp_transport::QuinnTransport>,
+    state: Arc<Mutex<SignalState>>,
+    running: Arc<AtomicBool>,
+}
+
+impl SignalManager {
+    /// Create SignalManager and start connect+register+recv on a background thread.
+    /// Returns immediately. The internal thread runs forever.
+    /// CRITICAL: tokio runtime must never be dropped on Android (libcrypto TLS conflict).
+    pub fn start(relay_addr: &str, seed_hex: &str) -> Result<Self, anyhow::Error> {
+        let addr: SocketAddr = relay_addr.parse()?;
+        let seed = if seed_hex.is_empty() {
+            wzp_crypto::Seed::generate()
+        } else {
+            wzp_crypto::Seed::from_hex(seed_hex).map_err(|e| anyhow::anyhow!(e))?
+        };
+        let identity = seed.derive_identity();
+        let pub_id = identity.public_identity();
+        let identity_pub = *pub_id.signing.as_bytes();
+        let fp = pub_id.fingerprint.to_string();
+
+        let state = Arc::new(Mutex::new(SignalState {
+            status: "connecting".into(),
+            fingerprint: fp.clone(),
+            ..Default::default()
+        }));
+        let running = Arc::new(AtomicBool::new(true));
+
+        // Channel to receive transport after connect succeeds
+        let (transport_tx, transport_rx) = std::sync::mpsc::channel();
+
+        let bg_state = Arc::clone(&state);
+        let bg_running = Arc::clone(&running);
+        let ret_state = Arc::clone(&state);
+        let ret_running = Arc::clone(&running);
+
+        // ONE thread, ONE runtime, NEVER dropped.
+        // Connect + register + recv loop all happen here.
+        std::thread::Builder::new()
+            .name("wzp-signal".into())
+            .stack_size(4 * 1024 * 1024)
+            .spawn(move || {
+                let rt = tokio::runtime::Builder::new_current_thread()
+                    .enable_all()
+                    .build()
+                    .expect("tokio runtime");
+
+                rt.block_on(async move {
+                    info!(fingerprint = %fp, relay = %addr, "signal: connecting");
+
+                    let bind: SocketAddr = "0.0.0.0:0".parse().unwrap();
+                    let endpoint = match wzp_transport::create_endpoint(bind, None) {
+                        Ok(e) => e,
+                        Err(e) => {
+                            error!("signal endpoint: {e}");
+                            bg_state.lock().unwrap().status = "idle".into();
+                            return;
+                        }
+                    };
+                    let client_cfg = wzp_transport::client_config();
+                    let conn = match wzp_transport::connect(&endpoint, addr, "_signal", client_cfg).await {
+                        Ok(c) => c,
+                        Err(e) => {
+                            error!("signal connect: {e}");
+                            bg_state.lock().unwrap().status = "idle".into();
+                            return;
+                        }
+                    };
+                    let transport = Arc::new(wzp_transport::QuinnTransport::new(conn));
+
+                    // Register
+                    if let Err(e) = transport.send_signal(&SignalMessage::RegisterPresence {
+                        identity_pub, signature: vec![], alias: None,
+                    }).await {
+                        error!("signal register: {e}");
+                        bg_state.lock().unwrap().status = "idle".into();
+                        return;
+                    }
+
+                    match transport.recv_signal().await {
+                        Ok(Some(SignalMessage::RegisterPresenceAck { success: true, .. })) => {
+                            info!(fingerprint = %fp, "signal: registered");
+                            bg_state.lock().unwrap().status = "registered".into();
+                            // Send transport to caller
+                            let _ = transport_tx.send(transport.clone());
+                        }
+                        other => {
+                            error!("signal registration failed: {other:?}");
+                            bg_state.lock().unwrap().status = "idle".into();
+                            return;
+                        }
+                    }
+
+                    // Recv loop — runs forever
+                    loop {
+                        if !running.load(Ordering::Relaxed) { break; }
+
+                        match transport.recv_signal().await {
+                            Ok(Some(SignalMessage::CallRinging { call_id })) => {
+                                info!(call_id = %call_id, "signal: ringing");
+                                let mut s = state.lock().unwrap();
+                                s.status = "ringing".into();
+                            }
+                            Ok(Some(SignalMessage::DirectCallOffer { caller_fingerprint, caller_alias, call_id, .. })) => {
+                                info!(from = %caller_fingerprint, call_id = %call_id, "signal: incoming call");
+                                let mut s = state.lock().unwrap();
+                                s.status = "incoming".into();
+                                s.incoming_call_id = Some(call_id);
+                                s.incoming_caller_fp = Some(caller_fingerprint);
+                                s.incoming_caller_alias = caller_alias;
+                            }
+                            Ok(Some(SignalMessage::DirectCallAnswer { call_id, accept_mode, .. })) => {
+                                info!(call_id = %call_id, mode = ?accept_mode, "signal: call answered");
+                            }
+                            Ok(Some(SignalMessage::CallSetup { call_id, room, relay_addr })) => {
+                                info!(call_id = %call_id, room = %room, relay = %relay_addr, "signal: call setup");
+                                let mut s = state.lock().unwrap();
+                                s.status = "setup".into();
+                                s.call_setup_relay = Some(relay_addr);
+                                s.call_setup_room = Some(room);
+                                s.call_setup_id = Some(call_id);
+                            }
+                            Ok(Some(SignalMessage::Hangup { reason })) => {
+                                info!(reason = ?reason, "signal: hangup");
+                                let mut s = state.lock().unwrap();
+                                s.status = "registered".into();
+                                s.incoming_call_id = None;
+                                s.incoming_caller_fp = None;
+                                s.incoming_caller_alias = None;
+                                s.call_setup_relay = None;
+                                s.call_setup_room = None;
+                                s.call_setup_id = None;
+                            }
+                            Ok(Some(_)) => {}
+                            Ok(None) => {
+                                info!("signal: connection closed");
+                                break;
+                            }
+                            Err(e) => {
+                                error!("signal recv error: {e}");
+                                break;
+                            }
+                        }
+                    }
+
+                    bg_state.lock().unwrap().status = "idle".into();
+                }); // block_on
+
+                // Runtime intentionally NOT dropped — lives until thread exits.
+                // This prevents ring/libcrypto TLS cleanup conflict on Android.
+                // The thread is parked here forever (block_on returned = connection lost).
+                std::thread::park();
+            })?; // thread spawn
+
+        // Wait for transport (up to 10s)
+        let transport = transport_rx.recv_timeout(std::time::Duration::from_secs(10))
+            .map_err(|_| anyhow::anyhow!("signal connect timeout — check relay address"))?;
+
+        Ok(Self { transport, state: ret_state, running: ret_running })
+    }
+
+    /// Get current state (non-blocking).
+    pub fn get_state(&self) -> SignalState {
+        self.state.lock().unwrap().clone()
+    }
+
+    /// Get state as JSON string.
+    pub fn get_state_json(&self) -> String {
+        serde_json::to_string(&self.get_state()).unwrap_or_else(|_| "{}".into())
+    }
+
+    /// Place a direct call.
+    pub fn place_call(&self, target_fp: &str) -> Result<(), anyhow::Error> {
+        let fp = self.state.lock().unwrap().fingerprint.clone();
+        let target = target_fp.to_string();
+        let call_id = format!("{:016x}", std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH).unwrap().as_nanos());
+        let transport = self.transport.clone();
+
+        // Send on a small thread (async send needs a runtime)
+        std::thread::Builder::new()
+            .name("wzp-call-send".into())
+            .spawn(move || {
+                let rt = tokio::runtime::Builder::new_current_thread()
+                    .enable_all().build().expect("rt");
+                rt.block_on(async {
+                    let _ = transport.send_signal(&SignalMessage::DirectCallOffer {
+                        caller_fingerprint: fp,
+                        caller_alias: None,
+                        target_fingerprint: target,
+                        call_id,
+                        identity_pub: [0u8; 32],
+                        ephemeral_pub: [0u8; 32],
+                        signature: vec![],
+                        supported_profiles: vec![wzp_proto::QualityProfile::GOOD],
+                    }).await;
+                });
+            })?;
+        Ok(())
+    }
+
+    /// Answer an incoming call.
+    pub fn answer_call(&self, call_id: &str, mode: wzp_proto::CallAcceptMode) -> Result<(), anyhow::Error> {
+        let call_id = call_id.to_string();
+        let transport = self.transport.clone();
+
+        std::thread::Builder::new()
+            .name("wzp-answer-send".into())
+            .spawn(move || {
+                let rt = tokio::runtime::Builder::new_current_thread()
+                    .enable_all().build().expect("rt");
+                rt.block_on(async {
+                    let _ = transport.send_signal(&SignalMessage::DirectCallAnswer {
+                        call_id,
+                        accept_mode: mode,
+                        identity_pub: None,
+                        ephemeral_pub: None,
+                        signature: None,
+                        chosen_profile: Some(wzp_proto::QualityProfile::GOOD),
+                    }).await;
+                });
+            })?;
+        Ok(())
+    }
+
+    /// Send hangup.
+    pub fn hangup(&self) {
+        let transport = self.transport.clone();
+        let state = self.state.clone();
+        std::thread::spawn(move || {
+            let rt = tokio::runtime::Builder::new_current_thread()
+                .enable_all().build().expect("rt");
+            rt.block_on(async {
+                let _ = transport.send_signal(&SignalMessage::Hangup {
+                    reason: wzp_proto::HangupReason::Normal,
+                }).await;
+            });
+            let mut s = state.lock().unwrap();
+            s.status = "registered".into();
+            s.incoming_call_id = None;
+            s.incoming_caller_fp = None;
+            s.incoming_caller_alias = None;
+            s.call_setup_relay = None;
+            s.call_setup_room = None;
+            s.call_setup_id = None;
+        });
+    }
+
+    /// Stop the signal connection.
+    pub fn stop(&self) {
+        self.running.store(false, Ordering::Release);
+        self.transport.connection().close(0u32.into(), b"shutdown");
+    }
+}
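The core synchronization trick in `SignalManager::start` is the mpsc handshake: the background thread does the connect and register, then hands the transport back over a channel, while the caller blocks with `recv_timeout` so a dead relay surfaces as an error instead of a hang. A minimal sketch of just that handshake, with a string standing in for the transport and a flag simulating connect success:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Sketch of the start() handshake: the background thread performs the
// (simulated) connect, sends the result over a channel, and the caller
// waits with a bounded timeout.
fn start(simulate_success: bool) -> Result<String, String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // stands in for connect + register on the never-dropped runtime thread
        if simulate_success {
            let _ = tx.send("transport".to_string());
        }
        // real code parks here forever instead of returning
    });
    rx.recv_timeout(Duration::from_millis(500))
        .map_err(|_| "signal connect timeout".to_string())
}

fn main() {
    assert_eq!(start(true).unwrap(), "transport");
    // on failure the sender is dropped, so recv_timeout errors promptly
    assert!(start(false).is_err());
    println!("ok");
}
```

A nice property of this shape: if the background thread fails and drops the sender, `recv_timeout` returns a `Disconnected` error immediately rather than waiting out the full timeout.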
@@ -11,6 +11,12 @@ pub enum CallState {
     Active,
     Reconnecting,
     Closed,
+    /// Connected to relay signal channel, registered for direct calls.
+    Registered,
+    /// Outgoing call ringing on callee's side.
+    Ringing,
+    /// Incoming call received, waiting for user to accept/reject.
+    IncomingCall,
 }
 
 impl serde::Serialize for CallState {
@@ -21,6 +27,9 @@ impl serde::Serialize for CallState {
             CallState::Active => 2,
             CallState::Reconnecting => 3,
             CallState::Closed => 4,
+            CallState::Registered => 5,
+            CallState::Ringing => 6,
+            CallState::IncomingCall => 7,
         };
         serializer.serialize_u8(n)
     }
@@ -51,12 +60,36 @@ pub struct CallStats {
     pub underruns: u64,
     /// Frames recovered by FEC.
     pub fec_recovered: u64,
+    /// Playout ring overflow count (reader was lapped by writer).
+    pub playout_overflows: u64,
+    /// Playout ring underrun count (reader found empty buffer).
+    pub playout_underruns: u64,
+    /// Capture ring overflow count.
+    pub capture_overflows: u64,
     /// Current mic audio level (RMS of i16 samples, 0-32767).
     pub audio_level: u32,
+    /// Our current outgoing codec name (e.g. "Opus24k", "Codec2_1200").
+    pub current_codec: String,
+    /// Last seen incoming codec from other participants.
+    pub peer_codec: String,
+    /// Whether auto quality mode is active.
+    pub auto_mode: bool,
     /// Number of participants in the room (from last RoomUpdate).
     pub room_participant_count: u32,
     /// Participant list (fingerprint + optional alias) serialized as JSON array.
     pub room_participants: Vec<RoomMember>,
+    /// SAS code for verbal verification (None if not in a call).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub sas_code: Option<u32>,
+    /// Incoming call info (present when state == IncomingCall).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub incoming_call_id: Option<String>,
+    /// Fingerprint of the caller (present when state == IncomingCall).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub incoming_caller_fp: Option<String>,
+    /// Alias of the caller (present when state == IncomingCall).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub incoming_caller_alias: Option<String>,
 }
 
 /// A room member entry, serialized into the stats JSON.
@@ -64,4 +97,5 @@ pub struct CallStats {
 pub struct RoomMember {
     pub fingerprint: String,
     pub alias: Option<String>,
+    pub relay_label: Option<String>,
 }
@@ -47,6 +47,11 @@ struct CliArgs {
     room: Option<String>,
     token: Option<String>,
     _metrics_file: Option<String>,
+    version_check: bool,
+    /// Connect to relay for persistent signaling (direct calls).
+    signal: bool,
+    /// Place a direct call to a fingerprint (requires --signal).
+    call_target: Option<String>,
 }
 
 impl CliArgs {
@@ -88,12 +93,20 @@ fn parse_args() -> CliArgs {
     let mut room = None;
     let mut token = None;
     let mut metrics_file = None;
+    let mut version_check = false;
     let mut relay_str = None;
+    let mut signal = false;
+    let mut call_target = None;
 
     let mut i = 1;
     while i < args.len() {
         match args[i].as_str() {
             "--live" => live = true,
+            "--signal" => signal = true,
+            "--call" => {
+                i += 1;
+                call_target = Some(args.get(i).expect("--call requires a fingerprint").to_string());
+            }
             "--send-tone" => {
                 i += 1;
                 send_tone_secs = Some(
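The `--call` handling in the hunk above follows the loop's convention for value-taking flags: advance `i` and consume the next argument. A runnable sketch of just that convention, with a reduced argument set (only the two new flags, not the full `CliArgs`):

```rust
// Illustrative reduction of the hand-rolled argument loop: boolean flags set
// in place, value flags consume the next argument via `i += 1`.
fn parse(args: &[String]) -> (bool, Option<String>) {
    let mut signal = false;
    let mut call_target = None;
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            "--signal" => signal = true,
            "--call" => {
                i += 1;
                call_target = Some(args.get(i).expect("--call requires a fingerprint").clone());
            }
            _ => {}
        }
        i += 1;
    }
    (signal, call_target)
}

fn main() {
    let args: Vec<String> = ["--signal", "--call", "ab:cd"].iter().map(|s| s.to_string()).collect();
    let (signal, target) = parse(&args);
    assert!(signal);
    assert_eq!(target.as_deref(), Some("ab:cd"));
    println!("ok");
}
```

As in the real loop, a trailing `--call` with no value panics via `expect`, which is the crate's chosen failure mode for malformed CLI input.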
@@ -169,6 +182,7 @@ fn parse_args() -> CliArgs {
                 );
             }
             "--sweep" => sweep = true,
+            "--version-check" => { version_check = true; }
             "--help" | "-h" => {
                 eprintln!("Usage: wzp-client [options] [relay-addr]");
                 eprintln!();
@@ -221,6 +235,9 @@ fn parse_args() -> CliArgs {
         room,
         token,
         _metrics_file: metrics_file,
+        version_check,
+        signal,
+        call_target,
     }
 }
@@ -239,6 +256,32 @@ async fn main() -> anyhow::Result<()> {
         return Ok(());
     }
 
+    // --version-check: query relay version over QUIC and exit
+    if cli.version_check {
+        let client_config = wzp_transport::client_config();
+        let bind_addr: SocketAddr = "0.0.0.0:0".parse()?;
+        let endpoint = wzp_transport::create_endpoint(bind_addr, None)?;
+        let conn = wzp_transport::connect(&endpoint, cli.relay_addr, "version", client_config).await?;
+        match conn.accept_uni().await {
+            Ok(mut recv) => {
+                let data = recv.read_to_end(256).await.unwrap_or_default();
+                let version = String::from_utf8_lossy(&data);
+                println!("{} {}", cli.relay_addr, version.trim());
+            }
+            Err(e) => {
+                eprintln!("relay {} does not support version query: {e}", cli.relay_addr);
+            }
+        }
+        endpoint.close(0u32.into(), b"done");
+        return Ok(());
+    }
+
+    // --signal mode: persistent signaling for direct calls
+    if cli.signal {
+        let seed = cli.resolve_seed();
+        return run_signal_mode(cli.relay_addr, seed, cli.token, cli.call_target).await;
+    }
+
     let seed = cli.resolve_seed();
 
     info!(
@@ -250,12 +293,11 @@ async fn main() -> anyhow::Result<()> {
 "WarzonePhone client"
 );
 
-// Hash room name for SNI privacy (or "default" if none specified)
+// Use raw room name as SNI (consistent with Android + Desktop clients for federation)
 let sni = match &cli.room {
 Some(name) => {
-let hashed = wzp_crypto::hash_room_name(name);
-info!(room = %name, hashed = %hashed, "room name hashed for SNI");
-hashed
+info!(room = %name, "using room name as SNI");
+name.clone()
 }
 None => "default".to_string(),
 };
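The hunk above drops the hashed-SNI step so all clients agree on the server name for federation. The selection reduces to a small pure function; a std-only sketch (helper name hypothetical):

```rust
// SNI selection after this change: the raw room name, or "default" when
// no room was given. Before the change, the Some-arm hashed the name via
// wzp_crypto::hash_room_name first.
fn room_sni(room: Option<&str>) -> String {
    match room {
        Some(name) => name.to_string(), // raw name, shared with Android/Desktop clients
        None => "default".to_string(),
    }
}

fn main() {
    assert_eq!(room_sni(Some("ops-room")), "ops-room");
    assert_eq!(room_sni(None), "default");
}
```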
@@ -274,6 +316,26 @@ async fn main() -> anyhow::Result<()> {
 
 let transport = Arc::new(wzp_transport::QuinnTransport::new(connection));
 
+// Register shutdown handler so SIGTERM/SIGINT always closes QUIC cleanly.
+// Without this, killed clients leave zombie connections on the relay for ~30s.
+{
+let shutdown_transport = transport.clone();
+tokio::spawn(async move {
+let mut sigterm = tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())
+.expect("failed to register SIGTERM handler");
+let mut sigint = tokio::signal::unix::signal(tokio::signal::unix::SignalKind::interrupt())
+.expect("failed to register SIGINT handler");
+tokio::select! {
+_ = sigterm.recv() => { info!("SIGTERM received, closing connection..."); }
+_ = sigint.recv() => { info!("SIGINT received, closing connection..."); }
+}
+// Close the QUIC connection immediately (APPLICATION_CLOSE frame).
+// Don't call process::exit — let the main task detect the closed
+// connection and perform clean shutdown (e.g., save recordings).
+shutdown_transport.connection().close(0u32.into(), b"shutdown");
+});
+}
+
 // Send auth token if provided (relay with --auth-url expects this first)
 if let Some(ref token) = cli.token {
 let auth = wzp_proto::SignalMessage::AuthToken {
@@ -624,3 +686,195 @@ async fn run_live(transport: Arc<wzp_transport::QuinnTransport>) -> anyhow::Resu
 info!("done");
 Ok(())
 }
+
+/// Persistent signaling mode for direct 1:1 calls.
+async fn run_signal_mode(
+relay_addr: SocketAddr,
+seed: wzp_crypto::Seed,
+token: Option<String>,
+call_target: Option<String>,
+) -> anyhow::Result<()> {
+use wzp_proto::SignalMessage;
+
+let identity = seed.derive_identity();
+let pub_id = identity.public_identity();
+let fp = pub_id.fingerprint.to_string();
+let identity_pub = *pub_id.signing.as_bytes();
+info!(fingerprint = %fp, "signal mode");
+
+// Connect to relay with SNI "_signal"
+let client_config = wzp_transport::client_config();
+let bind_addr: SocketAddr = if relay_addr.is_ipv6() {
+"[::]:0".parse()?
+} else {
+"0.0.0.0:0".parse()?
+};
+let endpoint = wzp_transport::create_endpoint(bind_addr, None)?;
+let conn = wzp_transport::connect(&endpoint, relay_addr, "_signal", client_config).await?;
+let transport = Arc::new(wzp_transport::QuinnTransport::new(conn));
+info!("connected to relay (signal channel)");
+
+// Auth if token provided
+if let Some(ref tok) = token {
+transport.send_signal(&SignalMessage::AuthToken { token: tok.clone() }).await?;
+}
+
+// Register presence (signature not verified in Phase 1)
+transport.send_signal(&SignalMessage::RegisterPresence {
+identity_pub,
+signature: vec![], // Phase 1: not verified
+alias: None,
+}).await?;
+
+// Wait for ack
+match transport.recv_signal().await? {
+Some(SignalMessage::RegisterPresenceAck { success: true, .. }) => {
+info!(fingerprint = %fp, "registered on relay — waiting for calls");
+}
+Some(SignalMessage::RegisterPresenceAck { success: false, error }) => {
+anyhow::bail!("registration failed: {}", error.unwrap_or_default());
+}
+other => {
+anyhow::bail!("unexpected response: {other:?}");
+}
+}
+
+// If --call specified, place the call
+if let Some(ref target) = call_target {
+info!(target = %target, "placing direct call...");
+let call_id = format!("{:016x}", std::time::SystemTime::now()
+.duration_since(std::time::UNIX_EPOCH).unwrap().as_nanos());
+
+transport.send_signal(&SignalMessage::DirectCallOffer {
+caller_fingerprint: fp.clone(),
+caller_alias: None,
+target_fingerprint: target.clone(),
+call_id: call_id.clone(),
+identity_pub,
+ephemeral_pub: [0u8; 32], // Phase 1: not used for key exchange
+signature: vec![],
+supported_profiles: vec![wzp_proto::QualityProfile::GOOD],
+}).await?;
+}
+
+// Signal recv loop — handle incoming signals
+let signal_transport = transport.clone();
+let relay = relay_addr;
+let my_fp = fp.clone();
+let my_seed = seed.0;
+
+loop {
+match signal_transport.recv_signal().await {
+Ok(Some(msg)) => match msg {
+SignalMessage::CallRinging { call_id } => {
+info!(call_id = %call_id, "ringing...");
+}
+SignalMessage::DirectCallOffer { caller_fingerprint, caller_alias, call_id, .. } => {
+info!(
+from = %caller_fingerprint,
+alias = ?caller_alias,
+call_id = %call_id,
+"incoming call — auto-accepting (generic)"
+);
+// Auto-accept for CLI testing
+let _ = signal_transport.send_signal(&SignalMessage::DirectCallAnswer {
+call_id,
+accept_mode: wzp_proto::CallAcceptMode::AcceptGeneric,
+identity_pub: Some(identity_pub),
+ephemeral_pub: None,
+signature: None,
+chosen_profile: Some(wzp_proto::QualityProfile::GOOD),
+}).await;
+}
+SignalMessage::DirectCallAnswer { call_id, accept_mode, .. } => {
+info!(call_id = %call_id, mode = ?accept_mode, "call answered");
+}
+SignalMessage::CallSetup { call_id, room, relay_addr: setup_relay } => {
+info!(call_id = %call_id, room = %room, relay = %setup_relay, "call setup — connecting to media room");
+
+// Connect to the media room
+let media_relay: SocketAddr = setup_relay.parse().unwrap_or(relay);
+let media_cfg = wzp_transport::client_config();
+match wzp_transport::connect(&endpoint, media_relay, &room, media_cfg).await {
+Ok(media_conn) => {
+let media_transport = Arc::new(wzp_transport::QuinnTransport::new(media_conn));
+
+// Crypto handshake
+match wzp_client::handshake::perform_handshake(&*media_transport, &my_seed, None).await {
+Ok(_session) => {
+info!("media connected — sending tone (press Ctrl+C to hang up)");
+
+// Simple tone sender for testing
+let mt = media_transport.clone();
+let send_task = tokio::spawn(async move {
+let config = wzp_client::call::CallConfig::default();
+let mut encoder = wzp_client::call::CallEncoder::new(&config);
+let duration = tokio::time::Duration::from_millis(20);
+loop {
+let pcm: Vec<i16> = (0..FRAME_SAMPLES)
+.map(|_| 0i16) // silence — could be tone
+.collect();
+if let Ok(pkts) = encoder.encode_frame(&pcm) {
+for pkt in &pkts {
+if mt.send_media(pkt).await.is_err() { return; }
+}
+}
+tokio::time::sleep(duration).await;
+}
+});
+
+// Wait for hangup or ctrl+c
+loop {
+tokio::select! {
+sig = signal_transport.recv_signal() => {
+match sig {
+Ok(Some(SignalMessage::Hangup { .. })) => {
+info!("remote hung up");
+break;
+}
+Ok(None) | Err(_) => break,
+_ => {}
+}
+}
+_ = tokio::signal::ctrl_c() => {
+info!("hanging up...");
+let _ = signal_transport.send_signal(&SignalMessage::Hangup {
+reason: wzp_proto::HangupReason::Normal,
+}).await;
+break;
+}
+}
+}
+
+send_task.abort();
+media_transport.close().await.ok();
+info!("call ended");
+}
+Err(e) => error!("media handshake failed: {e}"),
+}
+}
+Err(e) => error!("media connect failed: {e}"),
+}
+}
+SignalMessage::Hangup { reason } => {
+info!(reason = ?reason, "call ended by remote");
+}
+SignalMessage::Pong { .. } => {}
+other => {
+info!("signal: {:?}", std::mem::discriminant(&other));
+}
+},
+Ok(None) => {
+info!("signal connection closed");
+break;
+}
+Err(e) => {
+error!("signal error: {e}");
+break;
+}
+}
+}
+
+transport.close().await.ok();
+Ok(())
+}
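run_signal_mode derives its call id by hex-formatting the nanosecond Unix timestamp to 16 zero-padded digits. A std-only sketch of that scheme; truncating the `u128` to `u64` here is my assumption (the original formats the `u128` directly, which today also yields 16 digits):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Call-id sketch matching run_signal_mode: a zero-padded 16-hex-digit id
// derived from the nanosecond timestamp. Not a UUID, but unique enough
// for one caller placing one call at a time.
fn make_call_id() -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before Unix epoch")
        .as_nanos() as u64; // assumption: low 64 bits suffice
    format!("{nanos:016x}")
}

fn main() {
    let id = make_call_id();
    assert_eq!(id.len(), 16);
    assert!(id.chars().all(|c| c.is_ascii_hexdigit()));
}
```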
@@ -110,6 +110,15 @@ pub fn signal_to_call_type(signal: &SignalMessage) -> CallSignalType {
 SignalMessage::SessionForward { .. } => CallSignalType::Offer, // reuse
 SignalMessage::SessionForwardAck { .. } => CallSignalType::Offer, // reuse
 SignalMessage::RoomUpdate { .. } => CallSignalType::Offer, // reuse
+SignalMessage::FederationHello { .. }
+| SignalMessage::GlobalRoomActive { .. }
+| SignalMessage::GlobalRoomInactive { .. } => CallSignalType::Offer, // relay-only
+SignalMessage::DirectCallOffer { .. } => CallSignalType::Offer,
+SignalMessage::DirectCallAnswer { .. } => CallSignalType::Answer,
+SignalMessage::CallSetup { .. } => CallSignalType::Offer, // relay-only
+SignalMessage::CallRinging { .. } => CallSignalType::Ringing,
+SignalMessage::RegisterPresence { .. }
+| SignalMessage::RegisterPresenceAck { .. } => CallSignalType::Offer, // relay-only
 }
 }
 
@@ -38,6 +38,9 @@ pub async fn perform_handshake(
 ephemeral_pub,
 signature,
 supported_profiles: vec![
+QualityProfile::STUDIO_64K,
+QualityProfile::STUDIO_48K,
+QualityProfile::STUDIO_32K,
 QualityProfile::GOOD,
 QualityProfile::DEGRADED,
 QualityProfile::CATASTROPHIC,
@@ -79,7 +79,7 @@ impl AudioDecoder for OpusDecoder {
 
 fn set_profile(&mut self, profile: QualityProfile) -> Result<(), CodecError> {
 match profile.codec {
-CodecId::Opus24k | CodecId::Opus16k | CodecId::Opus6k => {
+c if c.is_opus() => {
 self.codec_id = profile.codec;
 self.frame_duration_ms = profile.frame_duration_ms;
 Ok(())
@@ -100,7 +100,7 @@ impl AudioEncoder for OpusEncoder {
 
 fn set_profile(&mut self, profile: QualityProfile) -> Result<(), CodecError> {
 match profile.codec {
-CodecId::Opus24k | CodecId::Opus16k | CodecId::Opus6k => {
+c if c.is_opus() => {
 self.codec_id = profile.codec;
 self.frame_duration_ms = profile.frame_duration_ms;
 self.apply_bitrate(profile.codec)?;
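The two hunks above replace per-variant lists with a single `is_opus()` guard, so adding a new Opus rate needs one change site instead of one per `match`. A trimmed, std-only sketch of that pattern (the enum here is a stand-in for the real `CodecId`):

```rust
// Guard-pattern sketch: a const predicate matched via `c if c.is_opus()`,
// mirroring the set_profile refactor above.
#[derive(Clone, Copy, Debug, PartialEq)]
enum CodecId {
    Opus24k,
    Opus64k,
    Codec2_3200,
}

impl CodecId {
    const fn is_opus(self) -> bool {
        matches!(self, Self::Opus24k | Self::Opus64k)
    }
}

fn profile_accepted(codec: CodecId) -> bool {
    match codec {
        c if c.is_opus() => true, // same guard shape as set_profile
        _ => false,
    }
}

fn main() {
    assert!(profile_accepted(CodecId::Opus64k));
    assert!(!profile_accepted(CodecId::Codec2_3200));
}
```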
@@ -110,7 +110,18 @@ impl KeyExchange for WarzoneKeyExchange {
 hk.expand(b"warzone-session-key", &mut session_key)
 .expect("HKDF expand for session key should not fail");
 
-Ok(Box::new(ChaChaSession::new(session_key)))
+// Derive SAS (Short Authentication String) from shared secret only.
+// The shared secret is identical on both sides (X25519 DH property).
+// A MITM would produce a different shared secret → different SAS.
+// We use a dedicated HKDF label so SAS is independent of the session key.
+let mut sas_key = [0u8; 4];
+hk.expand(b"warzone-sas-code", &mut sas_key)
+.expect("HKDF expand for SAS should not fail");
+let sas_code = u32::from_be_bytes(sas_key) % 10000;
+
+let mut session = ChaChaSession::new(session_key);
+session.set_sas(sas_code);
+Ok(Box::new(session))
 }
 }
 
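The final step of the SAS derivation above reduces four HKDF output bytes to a code in 0..10000. The HKDF expand itself needs the hkdf/sha2 crates, so this std-only sketch only shows the reduction, on stand-in bytes rather than a real derivation:

```rust
// SAS reduction: big-endian u32 from 4 bytes, modulo 10_000 → a code
// that both peers can read aloud as four decimal digits.
fn sas_from_bytes(bytes: [u8; 4]) -> u32 {
    u32::from_be_bytes(bytes) % 10_000
}

fn main() {
    // 0x12345678 = 305419896 → last four decimal digits: 9896.
    assert_eq!(sas_from_bytes([0x12, 0x34, 0x56, 0x78]), 9896);
    // Maximum input still yields a code below 10_000.
    assert!(sas_from_bytes([0xFF; 4]) < 10_000);
}
```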
@@ -211,4 +222,47 @@ mod tests {
 
 assert_eq!(&decrypted, plaintext);
 }
+
+#[test]
+fn sas_codes_match_between_peers() {
+let mut alice = WarzoneKeyExchange::from_identity_seed(&[0xAA; 32]);
+let mut bob = WarzoneKeyExchange::from_identity_seed(&[0xBB; 32]);
+
+let alice_eph_pub = alice.generate_ephemeral();
+let bob_eph_pub = bob.generate_ephemeral();
+
+let alice_session = alice.derive_session(&bob_eph_pub).unwrap();
+let bob_session = bob.derive_session(&alice_eph_pub).unwrap();
+
+let alice_sas = alice_session.sas_code();
+let bob_sas = bob_session.sas_code();
+
+assert!(alice_sas.is_some(), "Alice should have SAS");
+assert!(bob_sas.is_some(), "Bob should have SAS");
+assert_eq!(alice_sas, bob_sas, "SAS codes must match between peers");
+assert!(alice_sas.unwrap() < 10000, "SAS should be 4 digits");
+}
+
+#[test]
+fn sas_differs_for_different_peers() {
+let mut alice = WarzoneKeyExchange::from_identity_seed(&[0xAA; 32]);
+let mut bob = WarzoneKeyExchange::from_identity_seed(&[0xBB; 32]);
+let mut eve = WarzoneKeyExchange::from_identity_seed(&[0xEE; 32]);
+
+let alice_eph = alice.generate_ephemeral();
+let bob_eph = bob.generate_ephemeral();
+let eve_eph = eve.generate_ephemeral();
+
+let alice_bob_session = alice.derive_session(&bob_eph).unwrap();
+
+// Eve does separate handshake with Bob (MITM scenario)
+let eve_bob_session = eve.derive_session(&bob_eph).unwrap();
+
+// SAS codes should differ — Eve's session has different shared secret
+assert_ne!(
+alice_bob_session.sas_code(),
+eve_bob_session.sas_code(),
+"MITM session should produce different SAS"
+);
+}
 }
@@ -26,6 +26,8 @@ pub struct ChaChaSession {
 rekey_mgr: RekeyManager,
 /// Pending ephemeral secret for rekey (stored until peer responds).
 pending_rekey_secret: Option<StaticSecret>,
+/// Short Authentication String (4-digit code for verbal verification).
+sas_code: Option<u32>,
 }
 
 impl ChaChaSession {
@@ -46,9 +48,15 @@ impl ChaChaSession {
 recv_seq: 0,
 rekey_mgr: RekeyManager::new(shared_secret),
 pending_rekey_secret: None,
+sas_code: None,
 }
 }
 
+/// Set the SAS code (called by key exchange after derivation).
+pub fn set_sas(&mut self, code: u32) {
+self.sas_code = Some(code);
+}
+
 /// Install a new key (after rekeying).
 fn install_key(&mut self, new_key: [u8; 32]) {
 use sha2::Digest;
@@ -136,6 +144,10 @@ impl CryptoSession for ChaChaSession {
 
 Ok(())
 }
+
+fn sas_code(&self) -> Option<u32> {
+self.sas_code
+}
 }
 
 #[cfg(test)]
@@ -1,6 +1,7 @@
 //! RaptorQ FEC decoder — reassembles source blocks from received source and repair symbols.
 
 use std::collections::HashMap;
+use std::time::Instant;
 
 use raptorq::{EncodingPacket, ObjectTransmissionInformation, PayloadId, SourceBlockDecoder};
 use wzp_proto::error::FecError;
@@ -9,6 +10,9 @@ use wzp_proto::FecDecoder;
 /// Length prefix size (u16 little-endian), must match encoder.
 const LEN_PREFIX: usize = 2;
 
+/// Decoded blocks older than this are eligible for reuse by a new sender.
+const BLOCK_STALE_SECS: u64 = 2;
+
 /// State for one in-flight block being decoded.
 struct BlockState {
 /// Number of source symbols expected.
@@ -21,6 +25,8 @@ struct BlockState {
 decoded: bool,
 /// Cached decoded result.
 result: Option<Vec<Vec<u8>>>,
+/// When this block was last decoded (for staleness check).
+decoded_at: Option<Instant>,
 }
 
 /// RaptorQ-based FEC decoder that handles multiple concurrent blocks.
@@ -58,6 +64,7 @@ impl RaptorQFecDecoder {
 symbol_size: self.symbol_size,
 decoded: false,
 result: None,
+decoded_at: None,
 })
 }
 }
@@ -74,8 +81,20 @@ impl FecDecoder for RaptorQFecDecoder {
 let block = self.get_or_create_block(block_id);
 
 if block.decoded {
-// Already decoded, ignore additional symbols.
-return Ok(());
+// If the block was decoded recently, skip (normal duplicate).
+// If it's stale (>2s), a new sender is reusing this block_id — reset it.
+if let Some(at) = block.decoded_at {
+if at.elapsed().as_secs() >= BLOCK_STALE_SECS {
+block.decoded = false;
+block.result = None;
+block.decoded_at = None;
+block.packets.clear();
+} else {
+return Ok(());
+}
+} else {
+return Ok(());
+}
 }
 
 // Data should already be at symbol_size (length-prefixed and padded by the encoder).
@@ -132,6 +151,7 @@ impl FecDecoder for RaptorQFecDecoder {
 
 let block = self.blocks.get_mut(&block_id).unwrap();
 block.decoded = true;
+block.decoded_at = Some(Instant::now());
 block.result = Some(frames.clone());
 Ok(Some(frames))
 }
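The staleness rule introduced above (reclaim a decoded block once its decode timestamp is at least `BLOCK_STALE_SECS` old) isolates into a small predicate. A std-only sketch; the real check is inlined in the symbol-handling path rather than a helper like this:

```rust
use std::time::{Duration, Instant};

/// Decoded blocks older than this may be reclaimed by a new sender.
const BLOCK_STALE_SECS: u64 = 2;

// A block that was never decoded is in-flight, not stale; a decoded block
// becomes stale once its timestamp is >= BLOCK_STALE_SECS old.
fn is_stale(decoded_at: Option<Instant>) -> bool {
    decoded_at.is_some_and(|at| at.elapsed().as_secs() >= BLOCK_STALE_SECS)
}

fn main() {
    assert!(!is_stale(Some(Instant::now()))); // just decoded: fresh
    assert!(!is_stale(None)); // never decoded: not stale
    // Simulate a 3-second-old decode (checked_sub guards against short uptimes).
    if let Some(old) = Instant::now().checked_sub(Duration::from_secs(3)) {
        assert!(is_stale(Some(old)));
    }
}
```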
@@ -18,6 +18,12 @@ pub enum CodecId {
 Codec2_1200 = 4,
 /// Comfort noise descriptor (silence suppression)
 ComfortNoise = 5,
+/// Opus at 32kbps (studio low)
+Opus32k = 6,
+/// Opus at 48kbps (studio)
+Opus48k = 7,
+/// Opus at 64kbps (studio high)
+Opus64k = 8,
 }
 
 impl CodecId {
@@ -27,6 +33,9 @@ impl CodecId {
 Self::Opus24k => 24_000,
 Self::Opus16k => 16_000,
 Self::Opus6k => 6_000,
+Self::Opus32k => 32_000,
+Self::Opus48k => 48_000,
+Self::Opus64k => 64_000,
 Self::Codec2_3200 => 3_200,
 Self::Codec2_1200 => 1_200,
 Self::ComfortNoise => 0,
@@ -36,8 +45,7 @@ impl CodecId {
 /// Preferred frame duration in milliseconds.
 pub const fn frame_duration_ms(self) -> u8 {
 match self {
-Self::Opus24k => 20,
-Self::Opus16k => 20,
+Self::Opus24k | Self::Opus16k | Self::Opus32k | Self::Opus48k | Self::Opus64k => 20,
 Self::Opus6k => 40,
 Self::Codec2_3200 => 20,
 Self::Codec2_1200 => 40,
@@ -48,7 +56,8 @@ impl CodecId {
 /// Sample rate expected by this codec.
 pub const fn sample_rate_hz(self) -> u32 {
 match self {
-Self::Opus24k | Self::Opus16k | Self::Opus6k => 48_000,
+Self::Opus24k | Self::Opus16k | Self::Opus6k
+| Self::Opus32k | Self::Opus48k | Self::Opus64k => 48_000,
 Self::Codec2_3200 | Self::Codec2_1200 => 8_000,
 Self::ComfortNoise => 48_000,
 }
@@ -63,6 +72,9 @@ impl CodecId {
 3 => Some(Self::Codec2_3200),
 4 => Some(Self::Codec2_1200),
 5 => Some(Self::ComfortNoise),
+6 => Some(Self::Opus32k),
+7 => Some(Self::Opus48k),
+8 => Some(Self::Opus64k),
 _ => None,
 }
 }
@@ -71,6 +83,12 @@ impl CodecId {
 pub const fn to_wire(self) -> u8 {
 self as u8
 }
+
+/// Returns true if this is an Opus variant.
+pub const fn is_opus(self) -> bool {
+matches!(self, Self::Opus6k | Self::Opus16k | Self::Opus24k
+| Self::Opus32k | Self::Opus48k | Self::Opus64k)
+}
 }
 
 /// Describes the complete quality configuration for a call session.
@@ -111,6 +129,30 @@ impl QualityProfile {
 frames_per_block: 8,
 };
+
+/// Studio low: Opus 32kbps, minimal FEC.
+pub const STUDIO_32K: Self = Self {
+codec: CodecId::Opus32k,
+fec_ratio: 0.1,
+frame_duration_ms: 20,
+frames_per_block: 5,
+};
+
+/// Studio: Opus 48kbps, minimal FEC.
+pub const STUDIO_48K: Self = Self {
+codec: CodecId::Opus48k,
+fec_ratio: 0.1,
+frame_duration_ms: 20,
+frames_per_block: 5,
+};
+
+/// Studio high: Opus 64kbps, minimal FEC.
+pub const STUDIO_64K: Self = Self {
+codec: CodecId::Opus64k,
+fec_ratio: 0.1,
+frame_duration_ms: 20,
+frames_per_block: 5,
+};
+
 /// Estimated total bandwidth in kbps including FEC overhead.
 pub fn total_bitrate_kbps(&self) -> f32 {
 let base = self.codec.bitrate_bps() as f32 / 1000.0;
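The new STUDIO profiles pair their codec bitrates with a 0.1 FEC ratio. The hunk above truncates `total_bitrate_kbps` after its first line, so applying the FEC overhead multiplicatively below is my assumption, not the confirmed formula:

```rust
// Bandwidth arithmetic sketch: codec bitrate plus proportional FEC
// repair overhead. STUDIO_48K → 48 kbps * 1.1 ≈ 52.8 kbps, ignoring
// per-packet and QUIC framing overhead.
fn total_bitrate_kbps(codec_bps: u32, fec_ratio: f32) -> f32 {
    let base = codec_bps as f32 / 1000.0;
    base * (1.0 + fec_ratio)
}

fn main() {
    assert!((total_bitrate_kbps(48_000, 0.1) - 52.8).abs() < 0.01);
    assert!((total_bitrate_kbps(64_000, 0.1) - 70.4).abs() < 0.01);
}
```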
@@ -273,10 +273,21 @@ impl JitterBuffer {
 return;
 }
 
-// Check if packet is too old (already played out)
+// Check if packet is too old (already played out).
+// A backward jump of >100 seq (~2s at 50fps) indicates a new sender in a
+// federation room — reset instead of dropping.
 if self.stats.packets_played > 0 && seq_before(seq, self.next_playout_seq) {
-self.stats.packets_late += 1;
-return;
+let backward_distance = self.next_playout_seq.wrapping_sub(seq);
+tracing::warn!(seq, next = self.next_playout_seq, backward_distance, "jitter: backward seq detected");
+if backward_distance > 100 {
+tracing::info!(seq, next = self.next_playout_seq, "jitter: RESET — new sender detected");
+self.buffer.clear();
+self.next_playout_seq = seq;
+self.stats.packets_late = 0;
+} else {
+self.stats.packets_late += 1;
+return;
+}
 }
 
 // If we haven't started playout yet, adjust next_playout_seq to earliest known
@@ -412,10 +423,21 @@ impl JitterBuffer {
 return;
 }
 
-// Check if packet is too old (already played out)
+// Check if packet is too old (already played out).
+// A backward jump of >100 seq (~2s at 50fps) indicates a new sender in a
+// federation room — reset instead of dropping.
 if self.stats.packets_played > 0 && seq_before(seq, self.next_playout_seq) {
-self.stats.packets_late += 1;
-return;
+let backward_distance = self.next_playout_seq.wrapping_sub(seq);
+tracing::warn!(seq, next = self.next_playout_seq, backward_distance, "jitter: backward seq detected");
+if backward_distance > 100 {
+tracing::info!(seq, next = self.next_playout_seq, "jitter: RESET — new sender detected");
+self.buffer.clear();
+self.next_playout_seq = seq;
+self.stats.packets_late = 0;
+} else {
+self.stats.packets_late += 1;
+return;
+}
 }
 
 // If we haven't started playout yet, adjust next_playout_seq to earliest known
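The backward-distance test in the jitter-buffer hunks uses `wrapping_sub` so a sequence counter that wraps around still reads as a small distance. A std-only sketch of that decision, assuming 16-bit sequence numbers (the field width is not visible in these hunks):

```rust
// Distinguish a slightly-late packet (drop) from a new sender that
// restarted its sequence (reset), using wrapping arithmetic.
fn should_reset(next_playout: u16, seq: u16) -> bool {
    let backward_distance = next_playout.wrapping_sub(seq);
    backward_distance > 100 // > ~2s of frames at 50 packets/s ⇒ new sender
}

fn main() {
    // Slightly late packet: count it late, don't reset.
    assert!(!should_reset(500, 495));
    // Huge backward jump: treat as a new sender and reset playout.
    assert!(should_reset(5000, 100));
    // Wraparound: seq 65530 is only 11 behind next_playout 5 → no reset.
    assert!(!should_reset(5, 65530));
}
```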
@@ -25,8 +25,9 @@ pub mod traits;
 pub use codec_id::{CodecId, QualityProfile};
 pub use error::*;
 pub use packet::{
-HangupReason, MediaHeader, MediaPacket, MiniFrameContext, MiniHeader, QualityReport,
-RoomParticipant, SignalMessage, TrunkEntry, TrunkFrame, FRAME_TYPE_FULL, FRAME_TYPE_MINI,
+CallAcceptMode, HangupReason, MediaHeader, MediaPacket, MiniFrameContext, MiniHeader,
+QualityReport, RoomParticipant, SignalMessage, TrunkEntry, TrunkFrame, FRAME_TYPE_FULL,
+FRAME_TYPE_MINI,
 };
 pub use bandwidth::{BandwidthEstimator, CongestionState};
 pub use quality::{AdaptiveQualityController, NetworkContext, Tier};
@@ -656,6 +656,112 @@ pub enum SignalMessage {
         /// List of participants currently in the room.
         participants: Vec<RoomParticipant>,
     },
+
+    // ── Federation signals (relay-to-relay) ──
+
+    /// Federation: initial handshake — the connecting relay identifies itself.
+    FederationHello {
+        /// TLS certificate fingerprint of the connecting relay.
+        tls_fingerprint: String,
+    },
+
+    /// Federation: this relay now has local participants in a global room.
+    GlobalRoomActive {
+        room: String,
+        /// Participants on the announcing relay (for federated presence).
+        #[serde(default)]
+        participants: Vec<RoomParticipant>,
+    },
+
+    /// Federation: this relay's last local participant left a global room.
+    GlobalRoomInactive {
+        room: String,
+    },
+
+    // ── Direct calling signals (client ↔ relay signaling) ──
+
+    /// Register on relay for direct calls. Sent on `_signal` connections
+    /// after optional AuthToken.
+    RegisterPresence {
+        /// Client's Ed25519 identity public key.
+        identity_pub: [u8; 32],
+        /// Signature over ("register-presence" || identity_pub).
+        signature: Vec<u8>,
+        /// Optional display name.
+        alias: Option<String>,
+    },
+
+    /// Relay confirms presence registration.
+    RegisterPresenceAck {
+        success: bool,
+        #[serde(skip_serializing_if = "Option::is_none")]
+        error: Option<String>,
+    },
+
+    /// Direct call offer routed through the relay to a specific peer.
+    DirectCallOffer {
+        /// Caller's fingerprint.
+        caller_fingerprint: String,
+        /// Caller's display name.
+        caller_alias: Option<String>,
+        /// Target's fingerprint.
+        target_fingerprint: String,
+        /// Unique call session ID (UUID).
+        call_id: String,
+        /// Caller's Ed25519 identity pub.
+        identity_pub: [u8; 32],
+        /// Caller's ephemeral X25519 pub (for key exchange on media connect).
+        ephemeral_pub: [u8; 32],
+        /// Signature over (ephemeral_pub || target_fingerprint || call_id).
+        signature: Vec<u8>,
+        /// Supported quality profiles.
+        supported_profiles: Vec<crate::QualityProfile>,
+    },
+
+    /// Callee's response to a direct call.
+    DirectCallAnswer {
+        call_id: String,
+        /// How the callee accepts (or rejects).
+        accept_mode: CallAcceptMode,
+        /// Callee's identity pub (present when accepting).
+        #[serde(skip_serializing_if = "Option::is_none")]
+        identity_pub: Option<[u8; 32]>,
+        /// Callee's ephemeral pub (present when accepting).
+        #[serde(skip_serializing_if = "Option::is_none")]
+        ephemeral_pub: Option<[u8; 32]>,
+        /// Signature (present when accepting).
+        #[serde(skip_serializing_if = "Option::is_none")]
+        signature: Option<Vec<u8>>,
+        /// Chosen quality profile (present when accepting).
+        #[serde(skip_serializing_if = "Option::is_none")]
+        chosen_profile: Option<crate::QualityProfile>,
+    },
+
+    /// Relay tells both parties: media room is ready.
+    CallSetup {
+        call_id: String,
+        /// Room name on the relay for the media session (e.g., "_call:a1b2c3d4").
+        room: String,
+        /// Relay address for the QUIC media connection.
+        relay_addr: String,
+    },
+
+    /// Ringing notification (relay → caller, callee received the offer).
+    CallRinging {
+        call_id: String,
+    },
+}
+
+/// How the callee responds to a direct call.
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Serialize, Deserialize)]
+pub enum CallAcceptMode {
+    /// Reject the call.
+    Reject,
+    /// Accept with trust — in Phase 2, this enables P2P (reveals IP).
+    /// In Phase 1, behaves the same as AcceptGeneric.
+    AcceptTrusted,
+    /// Accept with privacy — relay always mediates media.
+    AcceptGeneric,
 }

 /// A participant entry in a RoomUpdate message.
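The signatures in `DirectCallOffer` are specified only by their doc comments. As a sketch, the byte string covered by the offer signature (`ephemeral_pub || target_fingerprint || call_id` per the comment) could be assembled as below; the function name and framing are illustrative, not taken from the crate:

```rust
// Illustrative helper (not from the crate): build the message the caller
// signs for DirectCallOffer, per the field's doc comment:
//   signature = Sign(ephemeral_pub || target_fingerprint || call_id)
fn offer_signing_bytes(ephemeral_pub: &[u8; 32], target_fingerprint: &str, call_id: &str) -> Vec<u8> {
    let mut msg = Vec::with_capacity(32 + target_fingerprint.len() + call_id.len());
    msg.extend_from_slice(ephemeral_pub);
    msg.extend_from_slice(target_fingerprint.as_bytes());
    msg.extend_from_slice(call_id.as_bytes());
    msg
}

fn main() {
    let eph = [7u8; 32];
    let msg = offer_signing_bytes(&eph, "ab:cd:ef", "550e8400-e29b");
    // 32 bytes of key material, then the two UTF-8 strings.
    assert_eq!(msg.len(), 32 + "ab:cd:ef".len() + "550e8400-e29b".len());
    assert_eq!(&msg[..32], &eph[..]);
    println!("ok: {} bytes", msg.len());
}
```

Binding the target fingerprint and call ID into the signed message is what stops a relay from replaying a caller's offer toward a different callee or session.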
@@ -665,6 +771,10 @@ pub struct RoomParticipant {
     pub fingerprint: String,
     /// Optional display name set by the client.
     pub alias: Option<String>,
+    /// Relay label — identifies which relay this participant is connected to.
+    /// None for local participants, Some("Relay B") for federated.
+    #[serde(default)]
+    pub relay_label: Option<String>,
 }

 /// Reasons for ending a call.
@@ -132,6 +132,14 @@ pub trait CryptoSession: Send + Sync {
     fn overhead(&self) -> usize {
        16 // ChaCha20-Poly1305 tag
    }
+
+    /// Short Authentication String (SAS) — 4-digit code for verbal verification.
+    /// Both peers derive the same code from the shared secret + identity keys.
+    /// If a MITM relay is intercepting, the codes will differ.
+    /// Returns None if SAS was not computed (e.g., relay-side sessions).
+    fn sas_code(&self) -> Option<u32> {
+        None
+    }
 }

 /// Key exchange using the Warzone identity model.
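The doc comment above says both peers derive the same 4-digit code from the shared secret plus identity keys. A toy standalone sketch of that idea follows; the real crate presumably uses a proper KDF (e.g., HKDF/SHA-256), and the FNV-style mixing here is purely illustrative:

```rust
// Toy SAS derivation (illustrative only — NOT the crate's actual KDF).
// Sorting the identity keys first makes both sides hash the same
// transcript regardless of which peer is the caller.
fn sas_code(shared_secret: &[u8; 32], id_a: &[u8; 32], id_b: &[u8; 32]) -> u32 {
    let (lo, hi) = if id_a <= id_b { (id_a, id_b) } else { (id_b, id_a) };
    let mut acc: u64 = 0xcbf2_9ce4_8422_2325; // FNV-1a offset basis
    for &b in shared_secret.iter().chain(lo.iter()).chain(hi.iter()) {
        acc ^= b as u64;
        acc = acc.wrapping_mul(0x0000_0100_0000_01b3); // FNV-1a prime
    }
    (acc % 10_000) as u32 // reduce to a 4-digit code
}

fn main() {
    let secret = [9u8; 32];
    let alice = [1u8; 32];
    let bob = [2u8; 32];
    // Both peers compute the same code, whichever order they pass the keys in.
    assert_eq!(sas_code(&secret, &alice, &bob), sas_code(&secret, &bob, &alice));
    assert!(sas_code(&secret, &alice, &bob) < 10_000);
    println!("SAS: {:04}", sas_code(&secret, &alice, &bob));
}
```

A MITM relay would hold a different shared secret with each side, so the transcripts (and therefore the spoken codes) would disagree.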
@@ -28,6 +28,9 @@ prometheus = "0.13"
 axum = { version = "0.7", default-features = false, features = ["tokio", "http1", "ws"] }
 tower-http = { version = "0.6", features = ["fs"] }
 futures-util = "0.3"
+dirs = "6"
+sha2 = { workspace = true }
+chrono = "0.4"

 [[bin]]
 name = "wzp-relay"
crates/wzp-relay/build.rs (new file, 18 lines)
@@ -0,0 +1,18 @@
+use std::process::Command;
+
+fn main() {
+    // Get git hash at build time
+    let output = Command::new("git")
+        .args(["rev-parse", "--short", "HEAD"])
+        .output();
+
+    let hash = match output {
+        Ok(o) if o.status.success() => {
+            String::from_utf8_lossy(&o.stdout).trim().to_string()
+        }
+        _ => "unknown".to_string(),
+    };
+
+    println!("cargo:rustc-env=WZP_BUILD_HASH={hash}");
+    println!("cargo:rerun-if-changed=.git/HEAD");
+}
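The variable injected by the build script can then be read at compile time elsewhere in the crate; a minimal sketch (the actual call site in wzp-relay may differ):

```rust
// option_env! reads WZP_BUILD_HASH at compile time; when build.rs has not
// set it (as in this standalone sketch), fall back to "unknown".
fn build_hash() -> &'static str {
    option_env!("WZP_BUILD_HASH").unwrap_or("unknown")
}

fn main() {
    assert!(!build_hash().is_empty());
    println!("wzp-relay build {}", build_hash());
}
```

Using `option_env!` rather than `env!` keeps the crate compiling even when the build script was skipped (e.g., outside a git checkout).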
crates/wzp-relay/src/call_registry.rs (new file, 199 lines)
@@ -0,0 +1,199 @@
+//! Direct call state tracking.
+//!
+//! Manages the lifecycle of 1:1 direct calls placed via the `_signal` channel.
+//! Each call goes through: Pending → Ringing → Active → Ended.
+
+use std::collections::HashMap;
+use std::time::{Duration, Instant};
+
+/// State of a direct call.
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum DirectCallState {
+    /// Offer sent to callee, waiting for response.
+    Pending,
+    /// Callee acknowledged, ringing.
+    Ringing,
+    /// Call accepted, media room active.
+    Active,
+    /// Call ended (hangup, reject, timeout, or error).
+    Ended,
+}
+
+/// A tracked direct call between two users.
+pub struct DirectCall {
+    pub call_id: String,
+    pub caller_fingerprint: String,
+    pub callee_fingerprint: String,
+    pub state: DirectCallState,
+    pub accept_mode: Option<wzp_proto::CallAcceptMode>,
+    /// Private room name (set when accepted).
+    pub room_name: Option<String>,
+    pub created_at: Instant,
+    pub answered_at: Option<Instant>,
+    pub ended_at: Option<Instant>,
+}
+
+/// Registry of active direct calls.
+pub struct CallRegistry {
+    calls: HashMap<String, DirectCall>,
+}
+
+impl CallRegistry {
+    pub fn new() -> Self {
+        Self {
+            calls: HashMap::new(),
+        }
+    }
+
+    /// Create a new pending call. Returns a reference to the created call.
+    pub fn create_call(&mut self, call_id: String, caller_fp: String, callee_fp: String) -> &DirectCall {
+        let call = DirectCall {
+            call_id: call_id.clone(),
+            caller_fingerprint: caller_fp,
+            callee_fingerprint: callee_fp,
+            state: DirectCallState::Pending,
+            accept_mode: None,
+            room_name: None,
+            created_at: Instant::now(),
+            answered_at: None,
+            ended_at: None,
+        };
+        self.calls.insert(call_id.clone(), call);
+        self.calls.get(&call_id).unwrap()
+    }
+
+    /// Get a call by ID.
+    pub fn get(&self, call_id: &str) -> Option<&DirectCall> {
+        self.calls.get(call_id)
+    }
+
+    /// Get a mutable call by ID.
+    pub fn get_mut(&mut self, call_id: &str) -> Option<&mut DirectCall> {
+        self.calls.get_mut(call_id)
+    }
+
+    /// Transition to Ringing state.
+    pub fn set_ringing(&mut self, call_id: &str) -> bool {
+        if let Some(call) = self.calls.get_mut(call_id) {
+            if call.state == DirectCallState::Pending {
+                call.state = DirectCallState::Ringing;
+                return true;
+            }
+        }
+        false
+    }
+
+    /// Transition to Active state.
+    pub fn set_active(&mut self, call_id: &str, mode: wzp_proto::CallAcceptMode, room: String) -> bool {
+        if let Some(call) = self.calls.get_mut(call_id) {
+            if call.state == DirectCallState::Pending || call.state == DirectCallState::Ringing {
+                call.state = DirectCallState::Active;
+                call.accept_mode = Some(mode);
+                call.room_name = Some(room);
+                call.answered_at = Some(Instant::now());
+                return true;
+            }
+        }
+        false
+    }
+
+    /// End a call.
+    pub fn end_call(&mut self, call_id: &str) -> Option<DirectCall> {
+        if let Some(call) = self.calls.get_mut(call_id) {
+            call.state = DirectCallState::Ended;
+            call.ended_at = Some(Instant::now());
+        }
+        self.calls.remove(call_id)
+    }
+
+    /// Find active/pending calls involving a fingerprint.
+    pub fn calls_for_fingerprint(&self, fp: &str) -> Vec<&DirectCall> {
+        self.calls.values()
+            .filter(|c| {
+                c.state != DirectCallState::Ended
+                    && (c.caller_fingerprint == fp || c.callee_fingerprint == fp)
+            })
+            .collect()
+    }
+
+    /// Find the peer's fingerprint in a call.
+    pub fn peer_fingerprint(&self, call_id: &str, my_fp: &str) -> Option<&str> {
+        self.calls.get(call_id).map(|c| {
+            if c.caller_fingerprint == my_fp {
+                c.callee_fingerprint.as_str()
+            } else {
+                c.caller_fingerprint.as_str()
+            }
+        })
+    }
+
+    /// Remove calls that have been pending longer than the timeout.
+    /// Returns the expired calls.
+    pub fn expire_stale(&mut self, timeout: Duration) -> Vec<DirectCall> {
+        let now = Instant::now();
+        let expired: Vec<String> = self.calls.iter()
+            .filter(|(_, c)| {
+                c.state == DirectCallState::Pending
+                    && now.duration_since(c.created_at) > timeout
+            })
+            .map(|(id, _)| id.clone())
+            .collect();
+
+        expired.into_iter()
+            .filter_map(|id| self.calls.remove(&id))
+            .collect()
+    }
+
+    /// Number of active (non-ended) calls.
+    pub fn active_count(&self) -> usize {
+        self.calls.values()
+            .filter(|c| c.state != DirectCallState::Ended)
+            .count()
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn call_lifecycle() {
+        let mut reg = CallRegistry::new();
+        reg.create_call("c1".into(), "alice".into(), "bob".into());
+
+        assert_eq!(reg.get("c1").unwrap().state, DirectCallState::Pending);
+        assert!(reg.set_ringing("c1"));
+        assert_eq!(reg.get("c1").unwrap().state, DirectCallState::Ringing);
+
+        assert!(reg.set_active("c1", wzp_proto::CallAcceptMode::AcceptGeneric, "_call:c1".into()));
+        assert_eq!(reg.get("c1").unwrap().state, DirectCallState::Active);
+        assert_eq!(reg.get("c1").unwrap().room_name.as_deref(), Some("_call:c1"));
+
+        let ended = reg.end_call("c1").unwrap();
+        assert_eq!(ended.state, DirectCallState::Ended);
+        assert_eq!(reg.active_count(), 0);
+    }
+
+    #[test]
+    fn expire_stale_calls() {
+        let mut reg = CallRegistry::new();
+        reg.create_call("c1".into(), "alice".into(), "bob".into());
+
+        // Not expired yet
+        let expired = reg.expire_stale(Duration::from_secs(30));
+        assert!(expired.is_empty());
+
+        // Force expiry with 0 timeout
+        let expired = reg.expire_stale(Duration::from_secs(0));
+        assert_eq!(expired.len(), 1);
+        assert_eq!(expired[0].call_id, "c1");
+    }
+
+    #[test]
+    fn peer_lookup() {
+        let mut reg = CallRegistry::new();
+        reg.create_call("c1".into(), "alice".into(), "bob".into());
+        assert_eq!(reg.peer_fingerprint("c1", "alice"), Some("bob"));
+        assert_eq!(reg.peer_fingerprint("c1", "bob"), Some("alice"));
+    }
+}
@@ -3,8 +3,41 @@
 use serde::{Deserialize, Serialize};
 use std::net::SocketAddr;

-/// Configuration for the relay daemon.
+/// A federated peer relay.
 #[derive(Clone, Debug, Serialize, Deserialize)]
+pub struct PeerConfig {
+    /// Address of the peer relay (e.g., "193.180.213.68:4433").
+    pub url: String,
+    /// Expected TLS certificate fingerprint (hex, with colons).
+    pub fingerprint: String,
+    /// Optional human-readable label.
+    #[serde(default)]
+    pub label: Option<String>,
+}
+
+/// A trusted relay — accepts inbound federation without needing the peer's address.
+#[derive(Clone, Debug, Serialize, Deserialize)]
+pub struct TrustedConfig {
+    /// Expected TLS certificate fingerprint (hex, with colons).
+    pub fingerprint: String,
+    /// Optional human-readable label.
+    #[serde(default)]
+    pub label: Option<String>,
+}
+
+/// A room declared global — bridged across all federated peers.
+#[derive(Clone, Debug, Serialize, Deserialize)]
+pub struct GlobalRoomConfig {
+    /// Room name to bridge (e.g., "android").
+    pub name: String,
+}
+
+/// Configuration for the relay daemon.
+///
+/// All fields have defaults, so a minimal TOML file only needs the
+/// fields you want to override (e.g., just `[[peers]]`).
+#[derive(Clone, Debug, Serialize, Deserialize)]
+#[serde(default)]
 pub struct RelayConfig {
     /// Address to listen on for incoming connections (client-facing).
     pub listen_addr: SocketAddr,
@@ -44,6 +77,22 @@ pub struct RelayConfig {
     pub ws_port: Option<u16>,
     /// Directory to serve static files from (HTML/JS/WASM for web clients).
     pub static_dir: Option<String>,
+    /// Federation peer relays.
+    #[serde(default)]
+    pub peers: Vec<PeerConfig>,
+    /// Global rooms bridged across federation.
+    #[serde(default)]
+    pub global_rooms: Vec<GlobalRoomConfig>,
+    /// Trusted relay fingerprints — accept inbound federation from these relays.
+    /// Unlike [[peers]], no url is needed — the peer connects to us.
+    #[serde(default)]
+    pub trusted: Vec<TrustedConfig>,
+    /// Debug tap: log packet headers for matching rooms ("*" = all rooms).
+    /// Activated via --debug-tap <room> or debug_tap = "room" in TOML.
+    pub debug_tap: Option<String>,
+    /// JSONL event log path for protocol analysis (--event-log).
+    #[serde(skip)]
+    pub event_log: Option<String>,
 }

 impl Default for RelayConfig {
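Because every field of `RelayConfig` has a default, a federation deployment only needs the entries it overrides. A hypothetical minimal peering config (all addresses and fingerprints invented):

```toml
listen_addr = "0.0.0.0:4433"

# Outbound federation peer
[[peers]]
url = "relay-b.example.com:4433"
fingerprint = "aa:bb:cc:dd:ee:ff:00:11"
label = "Relay B"

# Inbound-only trusted relay (it connects to us)
[[trusted]]
fingerprint = "22:33:44:55:66:77:88:99"
label = "Relay X"

# Room bridged across all federated peers
[[global_rooms]]
name = "android"
```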
@@ -62,6 +111,100 @@ impl Default for RelayConfig {
             trunking_enabled: false,
             ws_port: None,
             static_dir: None,
+            peers: Vec::new(),
+            global_rooms: Vec::new(),
+            trusted: Vec::new(),
+            debug_tap: None,
+            event_log: None,
         }
     }
 }
+
+/// Load relay configuration from a TOML file.
+pub fn load_config(path: &str) -> Result<RelayConfig, anyhow::Error> {
+    let content = std::fs::read_to_string(path)?;
+    let config: RelayConfig = toml::from_str(&content)?;
+    Ok(config)
+}
+
+/// Info about this relay instance, used to generate personalized example configs.
+pub struct RelayInfo {
+    pub listen_addr: String,
+    pub tls_fingerprint: String,
+    pub public_ip: Option<String>,
+}
+
+/// Load config from path, or create a personalized example config if it doesn't exist.
+pub fn load_or_create_config(path: &str, info: Option<&RelayInfo>) -> Result<RelayConfig, anyhow::Error> {
+    let p = std::path::Path::new(path);
+    if p.exists() {
+        return load_config(path);
+    }
+    // Create parent directory if needed
+    if let Some(parent) = p.parent() {
+        std::fs::create_dir_all(parent)?;
+    }
+    // Generate personalized example config
+    let example = generate_example_config(info);
+    std::fs::write(p, &example)?;
+    eprintln!("Created example config at {path} — edit it and restart.");
+    let config: RelayConfig = toml::from_str(&example)?;
+    Ok(config)
+}
+
+/// Generate an example TOML config, personalized with this relay's info if available.
+fn generate_example_config(info: Option<&RelayInfo>) -> String {
+    let listen = info.map(|i| i.listen_addr.as_str()).unwrap_or("0.0.0.0:4433");
+    let peer_example = if let Some(i) = info {
+        let ip = i.public_ip.as_deref().unwrap_or("this-relay-ip");
+        format!(
+            r#"# Other relays can peer with this relay using:
+# [[peers]]
+# url = "{ip}:{port}"
+# fingerprint = "{fp}"
+# label = "This Relay""#,
+            port = listen.rsplit(':').next().unwrap_or("4433"),
+            fp = i.tls_fingerprint,
+        )
+    } else {
+        "# To peer with another relay, add its url + fingerprint:".to_string()
+    };
+
+    format!(
+        r#"# WarzonePhone Relay Configuration
+# See docs/ADMINISTRATION.md for full reference.
+
+# Listen address for client connections
+listen_addr = "{listen}"
+
+# Maximum concurrent sessions
+# max_sessions = 100
+
+# Prometheus metrics endpoint (uncomment to enable)
+# metrics_port = 9090
+
+# featherChat auth endpoint (uncomment to enable)
+# auth_url = "https://chat.example.com/v1/auth/validate"
+
+{peer_example}
+
+# Federation: peer relays we connect to (outbound)
+# [[peers]]
+# url = "other-relay.example.com:4433"
+# fingerprint = "aa:bb:cc:dd:..."
+# label = "Relay B"
+
+# Federation: relays we trust inbound connections from
+# [[trusted]]
+# fingerprint = "ee:ff:00:11:..."
+# label = "Relay X"
+
+# Global rooms bridged across all federated peers
+# [[global_rooms]]
+# name = "general"
+
+# Debug: log packet headers for a room ("*" for all)
+# debug_tap = "*"
+"#
+    )
+}
crates/wzp-relay/src/event_log.rs (new file, 201 lines)
@@ -0,0 +1,201 @@
+//! JSONL event log for protocol analysis.
+//!
+//! When `--event-log <path>` is set, every media packet emits a structured
+//! event at each decision point (recv, forward, drop, deliver).
+//! Use `wzp-analyzer` to correlate events across multiple relays.
+
+use std::path::PathBuf;
+use std::sync::Arc;
+
+use serde::Serialize;
+use tokio::sync::mpsc;
+use tracing::{error, info};
+
+/// A single protocol event for JSONL output.
+#[derive(Debug, Serialize)]
+pub struct Event {
+    /// ISO 8601 timestamp with microseconds.
+    pub ts: String,
+    /// Event type.
+    pub event: &'static str,
+    /// Room name.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub room: Option<String>,
+    /// Source address or peer label.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub src: Option<String>,
+    /// Packet sequence number.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub seq: Option<u16>,
+    /// Codec identifier.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub codec: Option<String>,
+    /// FEC block ID.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub fec_block: Option<u8>,
+    /// FEC symbol index.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub fec_sym: Option<u8>,
+    /// Is FEC repair packet.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub repair: Option<bool>,
+    /// Payload length in bytes.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub len: Option<usize>,
+    /// Number of recipients.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub to_count: Option<usize>,
+    /// Peer label (for federation events).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub peer: Option<String>,
+    /// Drop/error reason.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub reason: Option<String>,
+    /// Presence action (active/inactive).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub action: Option<String>,
+    /// Participant count (presence events).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub participants: Option<usize>,
+}
+
+impl Event {
+    fn now() -> String {
+        chrono::Utc::now().format("%Y-%m-%dT%H:%M:%S%.6fZ").to_string()
+    }
+
+    /// Create a minimal event with just type and timestamp.
+    pub fn new(event: &'static str) -> Self {
+        Self {
+            ts: Self::now(),
+            event,
+            room: None,
+            src: None,
+            seq: None,
+            codec: None,
+            fec_block: None,
+            fec_sym: None,
+            repair: None,
+            len: None,
+            to_count: None,
+            peer: None,
+            reason: None,
+            action: None,
+            participants: None,
+        }
+    }
+
+    /// Set room.
+    pub fn room(mut self, room: &str) -> Self { self.room = Some(room.to_string()); self }
+    /// Set source.
+    pub fn src(mut self, src: &str) -> Self { self.src = Some(src.to_string()); self }
+    /// Set packet header fields from a MediaPacket.
+    pub fn packet(mut self, pkt: &wzp_proto::MediaPacket) -> Self {
+        self.seq = Some(pkt.header.seq);
+        self.codec = Some(format!("{:?}", pkt.header.codec_id));
+        self.fec_block = Some(pkt.header.fec_block);
+        self.fec_sym = Some(pkt.header.fec_symbol);
+        self.repair = Some(pkt.header.is_repair);
+        self.len = Some(pkt.payload.len());
+        self
+    }
+    /// Set seq only (when full packet not available).
+    pub fn seq(mut self, seq: u16) -> Self { self.seq = Some(seq); self }
+    /// Set payload length.
+    pub fn len(mut self, len: usize) -> Self { self.len = Some(len); self }
+    /// Set recipient count.
+    pub fn to_count(mut self, n: usize) -> Self { self.to_count = Some(n); self }
+    /// Set peer label.
+    pub fn peer(mut self, peer: &str) -> Self { self.peer = Some(peer.to_string()); self }
+    /// Set drop reason.
+    pub fn reason(mut self, reason: &str) -> Self { self.reason = Some(reason.to_string()); self }
+    /// Set presence action.
+    pub fn action(mut self, action: &str) -> Self { self.action = Some(action.to_string()); self }
+    /// Set participant count.
+    pub fn participants(mut self, n: usize) -> Self { self.participants = Some(n); self }
+}
+
+/// Handle for emitting events. Cheap to clone.
+#[derive(Clone)]
+pub struct EventLog {
+    tx: mpsc::UnboundedSender<Event>,
+}
+
+impl EventLog {
+    /// Emit an event (non-blocking; dropped if the writer task has gone away).
+    pub fn emit(&self, event: Event) {
+        let _ = self.tx.send(event);
+    }
+}
+
+/// No-op event log for when `--event-log` is not set.
+/// All methods are no-ops that compile to nothing.
+#[derive(Clone)]
+pub struct NoopEventLog;
+
+/// Unified event log handle — either real or no-op.
+#[derive(Clone)]
+pub enum EventLogger {
+    Active(EventLog),
+    Noop,
+}
+
+impl EventLogger {
+    pub fn emit(&self, event: Event) {
+        if let EventLogger::Active(log) = self {
+            log.emit(event);
+        }
+    }
+
+    pub fn is_active(&self) -> bool {
+        matches!(self, EventLogger::Active(_))
+    }
+}
+
+/// Start the event log writer. Returns an `EventLogger` handle.
+pub fn start_event_log(path: Option<PathBuf>) -> EventLogger {
+    match path {
+        Some(path) => {
+            let (tx, rx) = mpsc::unbounded_channel();
+            tokio::spawn(writer_task(path, rx));
+            info!("event log enabled");
+            EventLogger::Active(EventLog { tx })
+        }
+        None => EventLogger::Noop,
+    }
+}
+
+/// Background task that writes events to a JSONL file.
+async fn writer_task(path: PathBuf, mut rx: mpsc::UnboundedReceiver<Event>) {
+    use tokio::io::AsyncWriteExt;
+
+    let file = match tokio::fs::File::create(&path).await {
+        Ok(f) => f,
+        Err(e) => {
+            error!("failed to create event log {}: {e}", path.display());
+            return;
+        }
+    };
+    let mut writer = tokio::io::BufWriter::new(file);
+    let mut count: u64 = 0;
+
+    while let Some(event) = rx.recv().await {
+        match serde_json::to_string(&event) {
+            Ok(json) => {
+                if writer.write_all(json.as_bytes()).await.is_err() { break; }
+                if writer.write_all(b"\n").await.is_err() { break; }
+                count += 1;
+                // Flush every 100 events
+                if count % 100 == 0 {
+                    let _ = writer.flush().await;
+                }
+            }
+            Err(e) => {
+                error!("event log serialize error: {e}");
+            }
+        }
+    }
+
+    let _ = writer.flush().await;
+    info!(events = count, "event log closed");
+}
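Each event becomes one JSON object per line, with `None` fields skipped via `skip_serializing_if`. A hypothetical `forward` event (all values invented) would serialize roughly as:

```json
{"ts":"2025-01-01T12:00:00.000123Z","event":"forward","room":"android","src":"203.0.113.5:41641","seq":1042,"codec":"Opus","fec_block":3,"fec_sym":1,"repair":false,"len":162,"to_count":2}
```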
966
crates/wzp-relay/src/federation.rs
Normal file
966
crates/wzp-relay/src/federation.rs
Normal file
@@ -0,0 +1,966 @@
|
|||||||
|
//! Relay federation — global room routing between peer relays.
|
||||||
|
//!
|
||||||
|
//! Each relay maintains a forwarding table per global room. When a local participant
|
||||||
|
//! sends media in a global room, it's forwarded to all peer relays that have the room
|
||||||
|
//! active. Incoming federated media is delivered to local participants and optionally
|
||||||
|
//! forwarded to other active peers (multi-hop).
|
||||||
|
|
||||||
|
use std::collections::{HashMap, HashSet};
|
||||||
|
use std::net::SocketAddr;
|
||||||
|
use std::sync::Arc;
|
||||||
|
use std::time::{Duration, Instant};
|
||||||
|
|
||||||
|
use bytes::Bytes;
|
||||||
|
use sha2::{Sha256, Digest};
|
||||||
|
use tokio::sync::Mutex;
|
||||||
|
use tracing::{error, info, warn};
|
||||||
|
|
||||||
|
use wzp_proto::{MediaTransport, SignalMessage};
|
||||||
|
use wzp_transport::QuinnTransport;
|
||||||
|
|
||||||
|
use crate::config::{PeerConfig, TrustedConfig};
|
||||||
|
use crate::event_log::{Event, EventLogger};
|
||||||
|
use crate::room::{self, FederationMediaOut, RoomEvent, RoomManager};
|
||||||
|
|
||||||
|
/// Compute 8-byte room hash for federation datagram tagging.
|
||||||
|
pub fn room_hash(room_name: &str) -> [u8; 8] {
|
||||||
|
let h = Sha256::digest(room_name.as_bytes());
|
||||||
|
let mut out = [0u8; 8];
|
||||||
|
out.copy_from_slice(&h[..8]);
|
||||||
|
out
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Normalize a fingerprint string (remove colons, lowercase).
|
||||||
|
fn normalize_fp(fp: &str) -> String {
|
||||||
|
fp.replace(':', "").to_lowercase()
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Time-based dedup filter for federation datagrams.
|
||||||
|
/// Tracks recently seen packets and expires entries older than 2 seconds.
|
||||||
|
/// This prevents duplicate delivery when the same packet arrives via
|
||||||
|
/// multiple federation paths, while allowing new senders that happen to
|
||||||
|
/// reuse the same seq numbers.
|
||||||
|
struct Deduplicator {
|
||||||
|
/// Recently seen packet keys with insertion time.
|
||||||
|
entries: HashMap<u64, Instant>,
|
||||||
|
/// Expiry duration.
|
||||||
|
ttl: Duration,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Deduplicator {
|
||||||
|
fn new(_capacity: usize) -> Self {
|
||||||
|
Self {
|
||||||
|
entries: HashMap::with_capacity(512),
|
||||||
|
ttl: Duration::from_secs(2),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Returns true if this packet is a duplicate (already seen within TTL).
|
||||||
|
fn is_dup(&mut self, room_hash: &[u8; 8], seq: u16, extra: u64) -> bool {
|
||||||
|
let key = u64::from_be_bytes(*room_hash) ^ (seq as u64) ^ extra;
|
||||||
|
let now = Instant::now();
|
||||||
|
|
||||||
|
// Periodic cleanup (every ~256 packets)
|
||||||
|
if self.entries.len() > 256 {
|
||||||
|
self.entries.retain(|_, ts| now.duration_since(*ts) < self.ttl);
|
||||||
|
}
|
||||||
|
|
||||||
|
if let Some(ts) = self.entries.get(&key) {
|
||||||
|
if now.duration_since(*ts) < self.ttl {
|
||||||
|
return true; // seen recently — duplicate
|
||||||
|
}
|
||||||
|
}
|
||||||
|
self.entries.insert(key, now);
|
||||||
|
false
|
||||||
|
}
|
||||||
|
}

/// Per-room token bucket rate limiter for federation forwarding.
struct RateLimiter {
    /// Max packets per second per room (0 = unlimited).
    max_pps: u32,
    /// Tokens remaining in current window.
    tokens: u32,
    /// When the current window started.
    window_start: Instant,
}

impl RateLimiter {
    fn new(max_pps: u32) -> Self {
        Self {
            max_pps,
            tokens: max_pps,
            window_start: Instant::now(),
        }
    }

    /// Returns true if the packet should be allowed through.
    fn allow(&mut self) -> bool {
        // 0 means unlimited, matching the FEDERATION_RATE_LIMIT_PPS docs.
        if self.max_pps == 0 {
            return true;
        }
        let elapsed = self.window_start.elapsed();
        if elapsed >= Duration::from_secs(1) {
            self.tokens = self.max_pps;
            self.window_start = Instant::now();
        }
        if self.tokens > 0 {
            self.tokens -= 1;
            true
        } else {
            false
        }
    }
}
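
// A small sketch of a test for the windowed token bucket above; only the
// token-exhaustion path is exercised (testing the 1-second refill would need
// a sleep or a mocked clock). The test name is illustrative.
#[cfg(test)]
mod rate_limiter_tests {
    use super::*;

    #[test]
    fn tokens_exhaust_within_one_window() {
        let mut rl = RateLimiter::new(2);
        assert!(rl.allow()); // token 1
        assert!(rl.allow()); // token 2
        assert!(!rl.allow()); // bucket empty until the window rolls over
    }
}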

/// Active link to a peer relay.
struct PeerLink {
    transport: Arc<QuinnTransport>,
    label: String,
    /// Global rooms that this peer has reported as active.
    active_rooms: HashSet<String>,
    /// Remote participants per room (for federated presence in RoomUpdate).
    remote_participants: HashMap<String, Vec<wzp_proto::packet::RoomParticipant>>,
    /// Last time we received any data (signal or media) from this peer.
    last_seen: Instant,
}

/// Max federation packets per second per room (0 = unlimited).
const FEDERATION_RATE_LIMIT_PPS: u32 = 500;
/// Dedup window size (number of recent packets to remember).
const DEDUP_WINDOW_SIZE: usize = 4096;
/// Remote participants are considered stale after this duration with no updates.
const REMOTE_PARTICIPANT_STALE_SECS: u64 = 15;

/// Manages federation connections and global room forwarding.
pub struct FederationManager {
    peers: Vec<PeerConfig>,
    trusted: Vec<TrustedConfig>,
    global_rooms: HashSet<String>,
    room_mgr: Arc<Mutex<RoomManager>>,
    endpoint: quinn::Endpoint,
    local_tls_fp: String,
    metrics: Arc<crate::metrics::RelayMetrics>,
    /// Active peer connections, keyed by normalized fingerprint.
    peer_links: Arc<Mutex<HashMap<String, PeerLink>>>,
    /// Dedup filter for incoming federation datagrams.
    dedup: Mutex<Deduplicator>,
    /// Seq counter for federation media delivered to local clients.
    /// Ensures clients see monotonically increasing seq regardless of federation sender.
    local_delivery_seq: std::sync::atomic::AtomicU16,
    /// JSONL event log for protocol analysis.
    event_log: EventLogger,
    /// Per-room rate limiters for inbound federation media.
    rate_limiters: Mutex<HashMap<String, RateLimiter>>,
}

impl FederationManager {
    pub fn new(
        peers: Vec<PeerConfig>,
        trusted: Vec<TrustedConfig>,
        global_rooms: HashSet<String>,
        room_mgr: Arc<Mutex<RoomManager>>,
        endpoint: quinn::Endpoint,
        local_tls_fp: String,
        metrics: Arc<crate::metrics::RelayMetrics>,
        event_log: EventLogger,
    ) -> Self {
        Self {
            peers,
            trusted,
            global_rooms,
            room_mgr,
            endpoint,
            local_tls_fp,
            metrics,
            peer_links: Arc::new(Mutex::new(HashMap::new())),
            dedup: Mutex::new(Deduplicator::new(DEDUP_WINDOW_SIZE)),
            local_delivery_seq: std::sync::atomic::AtomicU16::new(0),
            event_log,
            rate_limiters: Mutex::new(HashMap::new()),
        }
    }

    /// Check if a room name (which may be hashed) is a global room.
    pub fn is_global_room(&self, room: &str) -> bool {
        self.resolve_global_room(room).is_some()
    }

    /// Resolve a room name (raw or hashed) to the canonical global room name.
    /// Returns the configured global room name if it matches.
    pub fn resolve_global_room(&self, room: &str) -> Option<&str> {
        // Direct match (raw room name, e.g. Android clients)
        if let Some(name) = self.global_rooms.get(room) {
            return Some(name.as_str());
        }
        // Hashed match (desktop clients hash room names for SNI privacy)
        self.global_rooms
            .iter()
            .find(|name| wzp_crypto::hash_room_name(name) == room)
            .map(|s| s.as_str())
    }

    /// Get the canonical federation room hash for a room.
    /// Always uses the configured global room name, not the client-provided name.
    pub fn global_room_hash(&self, room: &str) -> [u8; 8] {
        if let Some(canonical) = self.resolve_global_room(room) {
            room_hash(canonical)
        } else {
            room_hash(room)
        }
    }

    /// Start federation: spawns connection loops + event dispatcher.
    pub async fn run(self: Arc<Self>) {
        if self.peers.is_empty() && self.global_rooms.is_empty() {
            return;
        }
        info!(
            peers = self.peers.len(),
            global_rooms = self.global_rooms.len(),
            "federation starting"
        );

        let mut handles = Vec::new();

        // Per-peer outbound connection loops
        for peer in &self.peers {
            let this = self.clone();
            let peer = peer.clone();
            handles.push(tokio::spawn(async move {
                run_peer_loop(this, peer).await;
            }));
        }

        // Room event dispatcher
        let room_events = {
            let mgr = self.room_mgr.lock().await;
            mgr.subscribe_events()
        };
        let this = self.clone();
        handles.push(tokio::spawn(async move {
            run_room_event_dispatcher(this, room_events).await;
        }));

        // Stale presence sweeper: purges remote participants from dead peers
        let this = self.clone();
        handles.push(tokio::spawn(async move {
            run_stale_presence_sweeper(this).await;
        }));

        for h in handles {
            let _ = h.await;
        }
    }

    /// Handle an inbound federation connection from a recognized peer.
    pub async fn handle_inbound(
        self: &Arc<Self>,
        transport: Arc<QuinnTransport>,
        peer_config: PeerConfig,
    ) {
        let peer_fp = normalize_fp(&peer_config.fingerprint);
        let label = peer_config.label.unwrap_or_else(|| peer_config.url.clone());
        info!(peer = %label, "inbound federation link active");
        if let Err(e) = run_federation_link(self.clone(), transport, peer_fp, label.clone()).await {
            warn!(peer = %label, "inbound federation link ended: {e}");
        }
    }

    /// Get all remote participants for a room from all peer links.
    /// Deduplicates by fingerprint (the same participant may appear via multiple links).
    pub async fn get_remote_participants(&self, room: &str) -> Vec<wzp_proto::packet::RoomParticipant> {
        let canonical = self.resolve_global_room(room);
        let links = self.peer_links.lock().await;
        let mut result = Vec::new();
        for link in links.values() {
            // Check the canonical name
            if let Some(c) = canonical {
                if let Some(remote) = link.remote_participants.get(c) {
                    result.extend(remote.iter().cloned());
                }
                // Also check the raw room name, but only if different from canonical
                if c != room {
                    if let Some(remote) = link.remote_participants.get(room) {
                        result.extend(remote.iter().cloned());
                    }
                }
            } else {
                if let Some(remote) = link.remote_participants.get(room) {
                    result.extend(remote.iter().cloned());
                }
            }
        }
        // Deduplicate by fingerprint
        let mut seen = HashSet::new();
        result.retain(|p| seen.insert(p.fingerprint.clone()));
        result
    }

    /// Forward locally-generated media to all connected peers.
    /// For locally-originated media, we send to ALL peers (they decide whether to deliver).
    /// For forwarded media (multi-hop), handle_datagram filters by active_rooms.
    pub async fn forward_to_peers(&self, _room_name: &str, room_hash: &[u8; 8], media_data: &Bytes) {
        let links = self.peer_links.lock().await;
        if links.is_empty() {
            return;
        }
        // The tagged frame (room hash prefix + media) is identical for every
        // peer, so build it once outside the loop.
        let mut tagged = Vec::with_capacity(8 + media_data.len());
        tagged.extend_from_slice(room_hash);
        tagged.extend_from_slice(media_data);
        for (_fp, link) in links.iter() {
            match link.transport.send_raw_datagram(&tagged) {
                Ok(()) => {
                    self.metrics.federation_packets_forwarded
                        .with_label_values(&[&link.label, "out"]).inc();
                }
                Err(e) => warn!(peer = %link.label, "federation send error: {e}"),
            }
        }
    }

    // ── Trust verification (kept from previous implementation) ──

    pub fn find_peer_by_fingerprint(&self, fp: &str) -> Option<&PeerConfig> {
        self.peers.iter().find(|p| normalize_fp(&p.fingerprint) == normalize_fp(fp))
    }

    pub fn find_peer_by_addr(&self, addr: SocketAddr) -> Option<&PeerConfig> {
        let addr_ip = addr.ip();
        self.peers.iter().find(|p| {
            p.url.parse::<SocketAddr>()
                .map(|sa| sa.ip() == addr_ip)
                .unwrap_or(false)
        })
    }

    pub fn find_trusted_by_fingerprint(&self, fp: &str) -> Option<&TrustedConfig> {
        self.trusted.iter().find(|t| normalize_fp(&t.fingerprint) == normalize_fp(fp))
    }

    pub fn check_inbound_trust(&self, addr: SocketAddr, hello_fp: &str) -> Option<String> {
        if let Some(peer) = self.find_peer_by_addr(addr) {
            return Some(peer.label.clone().unwrap_or_else(|| peer.url.clone()));
        }
        if let Some(trusted) = self.find_trusted_by_fingerprint(hello_fp) {
            // Fall back to a truncated fingerprint as the label; guard against
            // a hello fingerprint shorter than 16 bytes so slicing cannot panic.
            return Some(trusted.label.clone().unwrap_or_else(|| {
                hello_fp.get(..16).unwrap_or(hello_fp).to_string()
            }));
        }
        None
    }
}

// ── Outbound media egress task ──

/// Drains the federation media channel and forwards to active peers.
pub async fn run_federation_media_egress(
    fm: Arc<FederationManager>,
    mut rx: tokio::sync::mpsc::Receiver<FederationMediaOut>,
) {
    let mut count: u64 = 0;
    while let Some(out) = rx.recv().await {
        count += 1;
        if count == 1 || count % 250 == 0 {
            info!(room = %out.room_name, count, "federation egress: forwarding media");
        }
        fm.forward_to_peers(&out.room_name, &out.room_hash, &out.data).await;
    }
    info!(total = count, "federation egress task ended");
}

// ── Room event dispatcher ──

/// Watches RoomManager events and sends GlobalRoomActive/Inactive to peers.
async fn run_room_event_dispatcher(
    fm: Arc<FederationManager>,
    mut events: tokio::sync::broadcast::Receiver<RoomEvent>,
) {
    loop {
        match events.recv().await {
            Ok(RoomEvent::LocalJoin { room }) => {
                if fm.is_global_room(&room) {
                    let participants = {
                        let mgr = fm.room_mgr.lock().await;
                        mgr.local_participant_list(&room)
                    };
                    info!(room = %room, count = participants.len(), "global room now active, announcing to peers");
                    let msg = SignalMessage::GlobalRoomActive { room, participants };
                    let links = fm.peer_links.lock().await;
                    for link in links.values() {
                        let _ = link.transport.send_signal(&msg).await;
                    }
                }
            }
            Ok(RoomEvent::LocalLeave { room }) => {
                if fm.is_global_room(&room) {
                    info!(room = %room, "global room now inactive, announcing to peers");
                    let msg = SignalMessage::GlobalRoomInactive { room };
                    let links = fm.peer_links.lock().await;
                    for link in links.values() {
                        let _ = link.transport.send_signal(&msg).await;
                    }
                }
            }
            Err(tokio::sync::broadcast::error::RecvError::Lagged(n)) => {
                warn!(missed = n, "room event receiver lagged");
            }
            Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
        }
    }
}

// ── Stale presence sweeper ──

/// Periodically checks for stale remote participants and purges them.
/// This handles the case where a peer link dies without sending GlobalRoomInactive
/// (e.g., QUIC timeout, network partition, crash).
async fn run_stale_presence_sweeper(fm: Arc<FederationManager>) {
    let mut interval = tokio::time::interval(Duration::from_secs(5));
    loop {
        interval.tick().await;
        let stale_threshold = Duration::from_secs(REMOTE_PARTICIPANT_STALE_SECS);

        // Find peers with stale remote_participants whose link is also gone or idle
        let stale_rooms: Vec<(String, String)> = {
            let links = fm.peer_links.lock().await;
            let mut stale = Vec::new();
            for (fp, link) in links.iter() {
                if link.last_seen.elapsed() > stale_threshold && !link.remote_participants.is_empty() {
                    for room in link.remote_participants.keys() {
                        stale.push((fp.clone(), room.clone()));
                    }
                }
            }
            stale
        };

        if stale_rooms.is_empty() {
            continue;
        }

        // Purge stale entries and collect affected rooms
        let mut affected_rooms = HashSet::new();
        {
            let mut links = fm.peer_links.lock().await;
            for (fp, room) in &stale_rooms {
                if let Some(link) = links.get_mut(fp.as_str()) {
                    if link.last_seen.elapsed() > stale_threshold {
                        info!(peer = %link.label, room = %room, "purging stale remote participants (no data for {}s)", link.last_seen.elapsed().as_secs());
                        link.remote_participants.remove(room);
                        link.active_rooms.remove(room);
                        affected_rooms.insert(room.clone());
                    }
                }
            }
        }

        // Broadcast an updated RoomUpdate for each affected room
        for room in &affected_rooms {
            let mgr = fm.room_mgr.lock().await;
            for local_room in mgr.active_rooms() {
                if fm.resolve_global_room(&local_room) == fm.resolve_global_room(room) {
                    let mut all_participants = mgr.local_participant_list(&local_room);
                    let remote = fm.get_remote_participants(&local_room).await;
                    all_participants.extend(remote);
                    let mut seen = HashSet::new();
                    all_participants.retain(|p| seen.insert(p.fingerprint.clone()));
                    let update = SignalMessage::RoomUpdate {
                        count: all_participants.len() as u32,
                        participants: all_participants,
                    };
                    let senders = mgr.local_senders(&local_room);
                    drop(mgr);
                    room::broadcast_signal(&senders, &update).await;
                    info!(room = %room, "swept stale presence; broadcast updated RoomUpdate");
                    break;
                }
            }
        }
    }
}

// ── Peer connection management ──

/// Persistent connection loop for one peer; reconnects with exponential backoff.
async fn run_peer_loop(fm: Arc<FederationManager>, peer: PeerConfig) {
    let mut backoff = Duration::from_secs(5);
    loop {
        info!(peer_url = %peer.url, label = ?peer.label, "federation: connecting to peer...");
        match connect_to_peer(&fm, &peer).await {
            Ok(transport) => {
                // Link established: reset backoff for the next reconnect.
                backoff = Duration::from_secs(5);
                let peer_fp = normalize_fp(&peer.fingerprint);
                let label = peer.label.clone().unwrap_or_else(|| peer.url.clone());
                if let Err(e) = run_federation_link(fm.clone(), transport, peer_fp, label).await {
                    warn!(peer_url = %peer.url, "federation link ended: {e}");
                }
            }
            Err(e) => {
                warn!(peer_url = %peer.url, backoff_s = backoff.as_secs(), "federation connect failed: {e}");
            }
        }
        tokio::time::sleep(backoff).await;
        backoff = (backoff * 2).min(Duration::from_secs(300));
    }
}
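
// A quick sketch of the reconnect schedule above: doubling from 5s, capped at
// 300s. The test mirrors the backoff arithmetic inline (the loop body is not
// factored out), so it illustrates the schedule rather than testing
// run_peer_loop itself.
#[cfg(test)]
mod backoff_tests {
    #[test]
    fn backoff_doubles_and_caps_at_300s() {
        let mut backoff = 5u64;
        let mut schedule = Vec::new();
        for _ in 0..8 {
            schedule.push(backoff);
            backoff = (backoff * 2).min(300);
        }
        assert_eq!(schedule, vec![5, 10, 20, 40, 80, 160, 300, 300]);
    }
}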

/// Connect to a peer relay and send hello.
async fn connect_to_peer(fm: &FederationManager, peer: &PeerConfig) -> Result<Arc<QuinnTransport>, anyhow::Error> {
    let addr: SocketAddr = peer.url.parse()?;
    let client_cfg = wzp_transport::client_config();
    let conn = wzp_transport::connect(&fm.endpoint, addr, "_federation", client_cfg).await?;
    let transport = Arc::new(QuinnTransport::new(conn));

    // Send hello with our TLS fingerprint
    let hello = SignalMessage::FederationHello {
        tls_fingerprint: fm.local_tls_fp.clone(),
    };
    transport.send_signal(&hello).await
        .map_err(|e| anyhow::anyhow!("federation hello send failed: {e}"))?;

    info!(peer_url = %peer.url, label = ?peer.label, "federation: connected (hello sent)");
    Ok(transport)
}

// ── Federation link (runs on a single QUIC connection) ──

/// Run the federation link: exchange global room state and forward media.
async fn run_federation_link(
    fm: Arc<FederationManager>,
    transport: Arc<QuinnTransport>,
    peer_fp: String,
    peer_label: String,
) -> Result<(), anyhow::Error> {
    // Register peer link + metrics
    fm.metrics.federation_peer_status.with_label_values(&[&peer_label]).set(1);
    {
        let mut links = fm.peer_links.lock().await;
        links.insert(peer_fp.clone(), PeerLink {
            transport: transport.clone(),
            label: peer_label.clone(),
            active_rooms: HashSet::new(),
            remote_participants: HashMap::new(),
            last_seen: Instant::now(),
        });
    }

    // Announce our currently active global rooms to this new peer.
    // Collect all announcements first, then send (avoid holding locks across await).
    let announcements = {
        let mgr = fm.room_mgr.lock().await;
        let active = mgr.active_rooms();
        let mut msgs = Vec::new();

        // Local rooms
        for room_name in &active {
            if fm.is_global_room(room_name) {
                let participants = mgr.local_participant_list(room_name);
                info!(peer = %peer_label, room = %room_name, participants = participants.len(), "announcing local global room to new peer");
                msgs.push(SignalMessage::GlobalRoomActive { room: room_name.clone(), participants });
            }
        }

        // Remote rooms from OTHER peers (for multi-hop propagation)
        let links = fm.peer_links.lock().await;
        for (fp, link) in links.iter() {
            if fp != &peer_fp {
                for (room, participants) in &link.remote_participants {
                    if fm.is_global_room(room) {
                        info!(peer = %peer_label, room = %room, via = %link.label, "propagating remote room to new peer");
                        msgs.push(SignalMessage::GlobalRoomActive {
                            room: room.clone(),
                            participants: participants.clone(),
                        });
                    }
                }
            }
        }
        msgs
    };
    for msg in &announcements {
        let _ = transport.send_signal(msg).await;
    }

    // Three concurrent tasks: signal recv + media recv + RTT monitor
    let signal_transport = transport.clone();
    let media_transport = transport.clone();
    let rtt_transport = transport.clone();
    let fm_signal = fm.clone();
    let fm_media = fm.clone();
    let peer_fp_signal = peer_fp.clone();
    let peer_fp_media = peer_fp.clone();
    let label_signal = peer_label.clone();

    let signal_task = async move {
        loop {
            match signal_transport.recv_signal().await {
                Ok(Some(msg)) => {
                    handle_signal(&fm_signal, &peer_fp_signal, &label_signal, msg).await;
                }
                Ok(None) => break,
                Err(e) => {
                    error!(peer = %label_signal, "federation signal error: {e}");
                    break;
                }
            }
        }
    };

    let peer_label_media = peer_label.clone();
    let media_task = async move {
        let mut media_count: u64 = 0;
        loop {
            match media_transport.connection().read_datagram().await {
                Ok(data) => {
                    media_count += 1;
                    if media_count == 1 || media_count % 250 == 0 {
                        info!(peer = %peer_label_media, media_count, len = data.len(), "federation: received datagram");
                    }
                    handle_datagram(&fm_media, &peer_fp_media, data).await;
                }
                Err(e) => {
                    info!(peer = %peer_label_media, "federation media task ended: {e}");
                    break;
                }
            }
        }
    };

    // RTT monitor: periodically sample QUIC RTT for this peer
    let rtt_task = async move {
        loop {
            tokio::time::sleep(Duration::from_secs(5)).await;
            // Sampled for observability; not yet exported as a metric.
            let _rtt_ms = rtt_transport.connection().stats().path.rtt.as_millis() as f64;
        }
    };

    tokio::select! {
        _ = signal_task => {}
        _ = media_task => {}
        _ = rtt_task => {}
    }

    // Cleanup: remove peer link + metrics
    fm.metrics.federation_peer_status.with_label_values(&[&peer_label]).set(0);
    {
        let mut links = fm.peer_links.lock().await;
        links.remove(&peer_fp);
    }
    info!(peer = %peer_label, "federation link ended");

    Ok(())
}

/// Handle an incoming federation signal.
async fn handle_signal(
    fm: &Arc<FederationManager>,
    peer_fp: &str,
    peer_label: &str,
    msg: SignalMessage,
) {
    // Update last_seen for this peer
    {
        let mut links = fm.peer_links.lock().await;
        if let Some(link) = links.get_mut(peer_fp) {
            link.last_seen = Instant::now();
        }
    }

    match msg {
        SignalMessage::GlobalRoomActive { room, participants } => {
            if fm.is_global_room(&room) {
                info!(peer = %peer_label, room = %room, remote_participants = participants.len(), "peer has global room active");
                let mut links = fm.peer_links.lock().await;
                if let Some(link) = links.get_mut(peer_fp) {
                    link.active_rooms.insert(room.clone());
                }
                // Update active rooms metric
                let total: usize = links.values().map(|l| l.active_rooms.len()).sum();
                fm.metrics.federation_active_rooms.set(total as i64);

                // Tag remote participants with their relay label (once; the same
                // tagged list is stored locally and propagated to other peers).
                let link_label = links.get(peer_fp).map(|l| l.label.clone());
                let tagged: Vec<_> = participants.iter().map(|p| {
                    let mut t = p.clone();
                    if t.relay_label.is_none() {
                        if let Some(label) = &link_label {
                            t.relay_label = Some(label.clone());
                        }
                    }
                    t
                }).collect();
                if let Some(link) = links.get_mut(peer_fp) {
                    link.remote_participants.insert(room.clone(), tagged.clone());
                }
                // Propagate to other peers (with relay labels preserved)
                for (fp, link) in links.iter() {
                    if fp != peer_fp {
                        let _ = link.transport.send_signal(&SignalMessage::GlobalRoomActive {
                            room: room.clone(),
                            participants: tagged.clone(),
                        }).await;
                    }
                }
                drop(links);

                // Broadcast updated RoomUpdate to local clients in this room.
                // Find the local room name (may be hashed or raw).
                let mgr = fm.room_mgr.lock().await;
                for local_room in mgr.active_rooms() {
                    if fm.is_global_room(&local_room) && fm.resolve_global_room(&local_room) == fm.resolve_global_room(&room) {
                        // Build merged participant list: local + all remote (deduped)
                        let mut all_participants = mgr.local_participant_list(&local_room);
                        let links = fm.peer_links.lock().await;
                        for link in links.values() {
                            if let Some(canonical) = fm.resolve_global_room(&local_room) {
                                if let Some(remote) = link.remote_participants.get(canonical) {
                                    all_participants.extend(remote.iter().cloned());
                                }
                                // Also check the raw room name, but only if different from canonical
                                if canonical != local_room {
                                    if let Some(remote) = link.remote_participants.get(&local_room) {
                                        all_participants.extend(remote.iter().cloned());
                                    }
                                }
                            }
                        }
                        // Deduplicate by fingerprint
                        let mut seen = HashSet::new();
                        all_participants.retain(|p| seen.insert(p.fingerprint.clone()));
                        let update = SignalMessage::RoomUpdate {
                            count: all_participants.len() as u32,
                            participants: all_participants,
                        };
                        let senders = mgr.local_senders(&local_room);
                        drop(links);
                        drop(mgr);
                        room::broadcast_signal(&senders, &update).await;
                        break;
                    }
                }
            }
        }
        SignalMessage::GlobalRoomInactive { room } => {
            info!(peer = %peer_label, room = %room, "peer global room now inactive");
            let mut links = fm.peer_links.lock().await;
            if let Some(link) = links.get_mut(peer_fp) {
                link.active_rooms.remove(&room);
                // Clear remote participants for this peer+room
                link.remote_participants.remove(&room);
                // Also try the canonical name
                if let Some(canonical) = fm.resolve_global_room(&room) {
                    link.remote_participants.remove(canonical);
                }
            }

            // Update active rooms metric
            let total: usize = links.values().map(|l| l.active_rooms.len()).sum();
            fm.metrics.federation_active_rooms.set(total as i64);

            // Build remaining remote participants (from all peers except the one going inactive)
            let remaining_remote: Vec<wzp_proto::packet::RoomParticipant> = {
                let canonical = fm.resolve_global_room(&room);
                let mut result = Vec::new();
                for (fp, link) in links.iter() {
                    if fp == peer_fp { continue; }
                    if let Some(c) = canonical {
                        if let Some(remote) = link.remote_participants.get(c) {
                            result.extend(remote.iter().cloned());
                        }
                    }
                }
                let mut seen = HashSet::new();
                result.retain(|p| seen.insert(p.fingerprint.clone()));
                result
            };

            // Collect peer transports to send to, then release the peer_links
            // lock BEFORE taking the room_mgr lock below (other paths lock
            // room_mgr first, so holding both in the opposite order here
            // risks a lock-order deadlock, and it also avoids holding the
            // lock across await).
            let peer_sends: Vec<_> = links.iter()
                .filter(|(fp, _)| *fp != peer_fp)
                .map(|(_, link)| link.transport.clone())
                .collect();
            drop(links);

            // Propagate to other peers: send an updated GlobalRoomActive with
            // the revised list, or GlobalRoomInactive if no participants
            // remain anywhere.
            let local_active = {
                let mgr = fm.room_mgr.lock().await;
                mgr.active_rooms().iter().any(|r| fm.resolve_global_room(r) == fm.resolve_global_room(&room))
            };
            let has_remaining = !remaining_remote.is_empty() || local_active;

            if has_remaining {
                // Send updated participant list to other peers
                let mut updated_participants = remaining_remote.clone();
                if local_active {
                    let mgr = fm.room_mgr.lock().await;
                    for local_room in mgr.active_rooms() {
                        if fm.resolve_global_room(&local_room) == fm.resolve_global_room(&room) {
                            updated_participants.extend(mgr.local_participant_list(&local_room));
                            break;
                        }
                    }
                }
                let msg = SignalMessage::GlobalRoomActive {
                    room: room.clone(),
                    participants: updated_participants,
                };
                for transport in &peer_sends {
                    let _ = transport.send_signal(&msg).await;
                }
            } else {
                // No participants left anywhere: propagate inactive
                let msg = SignalMessage::GlobalRoomInactive { room: room.clone() };
                for transport in &peer_sends {
                    let _ = transport.send_signal(&msg).await;
                }
            }

            // Broadcast updated RoomUpdate to local clients (remote participant removed)
            let mgr = fm.room_mgr.lock().await;
            for local_room in mgr.active_rooms() {
                if fm.is_global_room(&local_room) && fm.resolve_global_room(&local_room) == fm.resolve_global_room(&room) {
                    let mut all_participants = mgr.local_participant_list(&local_room);
                    all_participants.extend(remaining_remote.iter().cloned());
                    // Deduplicate by fingerprint
                    let mut seen = HashSet::new();
                    all_participants.retain(|p| seen.insert(p.fingerprint.clone()));
                    let update = SignalMessage::RoomUpdate {
                        count: all_participants.len() as u32,
                        participants: all_participants,
                    };
                    let senders = mgr.local_senders(&local_room);
                    drop(mgr);
                    room::broadcast_signal(&senders, &update).await;
                    info!(room = %room, "broadcast updated presence (remote participant removed)");
                    break;
                }
            }
        }
        _ => {} // ignore other signals
    }
}

/// Handle an incoming federation datagram (room-hash-tagged media).
async fn handle_datagram(
    fm: &Arc<FederationManager>,
    source_peer_fp: &str,
    data: Bytes,
) {
    if data.len() < 12 { return; } // 8-byte room hash + minimum packet

    let mut rh = [0u8; 8];
    rh.copy_from_slice(&data[..8]);
    let media_bytes = data.slice(8..);

    let pkt = match wzp_proto::MediaPacket::from_bytes(media_bytes.clone()) {
        Some(pkt) => pkt,
        None => {
            fm.event_log.emit(Event::new("federation_ingress_malformed").len(data.len()));
            return;
        }
    };

    // Event log: federation ingress
    let peer_label = {
        let links = fm.peer_links.lock().await;
        links.get(source_peer_fp).map(|l| l.label.clone()).unwrap_or_default()
    };
    fm.event_log.emit(Event::new("federation_ingress").packet(&pkt).peer(&peer_label));

    // Count inbound federation packet + update last_seen.
    // Use the peer label (as the "out" side does) so the metric's peer label
    // is consistent in both directions.
    fm.metrics.federation_packets_forwarded
        .with_label_values(&[&peer_label, "in"]).inc();
    {
        let mut links = fm.peer_links.lock().await;
        if let Some(link) = links.get_mut(source_peer_fp) {
            link.last_seen = Instant::now();
        }
    }

    // Dedup: drop packets we've already seen (multi-path duplicates).
    // The key folds in the first bytes of the payload, so different senders
    // that happen to reuse the same seq/timestamp are very unlikely to collide.
    let payload_hash = {
        let mut h = 0u64;
        for (i, &b) in media_bytes.iter().take(16).enumerate() {
            h ^= (b as u64) << ((i % 8) * 8);
        }
        h
    };
    {
        let mut dedup = fm.dedup.lock().await;
        if dedup.is_dup(&rh, pkt.header.seq, payload_hash) {
            fm.event_log.emit(Event::new("dedup_drop").seq(pkt.header.seq).peer(&peer_label));
            return;
        }
    }

    // Find the room by hash: check local rooms AND the global room config
    let room_name = {
        let mgr = fm.room_mgr.lock().await;
        let active = mgr.active_rooms();
        // First: check local rooms (which have participants)
        active.iter().find(|r| room_hash(r) == rh).cloned()
|
||||||
|
.or_else(|| active.iter().find(|r| fm.global_room_hash(r) == rh).cloned())
|
||||||
|
// Second: check global room config (hub relay may have no local participants)
|
||||||
|
.or_else(|| {
|
||||||
|
fm.global_rooms.iter().find(|name| room_hash(name) == rh).cloned()
|
||||||
|
})
|
||||||
|
};
|
||||||
|
|
||||||
|
let room_name = match room_name {
|
||||||
|
Some(r) => r,
|
||||||
|
None => {
|
||||||
|
fm.event_log.emit(Event::new("room_not_found").seq(pkt.header.seq).peer(&peer_label));
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// Rate limit per room
|
||||||
|
if FEDERATION_RATE_LIMIT_PPS > 0 {
|
||||||
|
let mut limiters = fm.rate_limiters.lock().await;
|
||||||
|
let limiter = limiters.entry(room_name.clone())
|
||||||
|
.or_insert_with(|| RateLimiter::new(FEDERATION_RATE_LIMIT_PPS));
|
||||||
|
if !limiter.allow() {
|
||||||
|
fm.event_log.emit(Event::new("rate_limit_drop").room(&room_name).seq(pkt.header.seq));
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Deliver to all local participants — forward the raw bytes as-is.
|
||||||
|
// The original sender's MediaPacket is preserved exactly (no re-serialization).
|
||||||
|
let locals = {
|
||||||
|
let mgr = fm.room_mgr.lock().await;
|
||||||
|
mgr.local_senders(&room_name)
|
||||||
|
};
|
||||||
|
for sender in &locals {
|
||||||
|
match sender {
|
||||||
|
room::ParticipantSender::Quic(t) => {
|
||||||
|
if let Err(e) = t.send_raw_datagram(&media_bytes) {
|
||||||
|
fm.event_log.emit(Event::new("local_deliver_error").room(&room_name).seq(pkt.header.seq).reason(&e.to_string()));
|
||||||
|
warn!("federation local delivery error: {e}");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
room::ParticipantSender::WebSocket(_) => { let _ = sender.send_raw(&pkt.payload).await; }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
fm.event_log.emit(Event::new("local_deliver").room(&room_name).seq(pkt.header.seq).to_count(locals.len()));
|
||||||
|
|
||||||
|
// Multi-hop: forward to ALL other connected peers (not the source)
|
||||||
|
// Don't filter by active_rooms — the receiving peer decides whether to deliver
|
||||||
|
let links = fm.peer_links.lock().await;
|
||||||
|
for (fp, link) in links.iter() {
|
||||||
|
if fp != source_peer_fp {
|
||||||
|
let mut tagged = Vec::with_capacity(8 + media_bytes.len());
|
||||||
|
tagged.extend_from_slice(&rh);
|
||||||
|
tagged.extend_from_slice(&media_bytes);
|
||||||
|
let _ = link.transport.send_raw_datagram(&tagged);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
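The dedup key in `handle_datagram` folds the first 16 payload bytes into a `u64` by XOR-shifting each byte into lane `(i % 8) * 8`. A standalone sketch of that fold (the free-function name is mine; the relay computes it inline):

```rust
/// Fold the first 16 bytes of a payload into a u64 dedup key,
/// mirroring the XOR-shift loop in handle_datagram.
fn payload_hash(media_bytes: &[u8]) -> u64 {
    let mut h = 0u64;
    for (i, &b) in media_bytes.iter().take(16).enumerate() {
        // Byte i lands in lane (i % 8); bytes 8..15 XOR over lanes 0..7.
        h ^= (b as u64) << ((i % 8) * 8);
    }
    h
}

fn main() {
    // Identical payloads collide — that's the point: duplicates get dropped.
    assert_eq!(payload_hash(&[1, 2, 3]), payload_hash(&[1, 2, 3]));
    // A one-byte difference within the first 16 bytes changes the key.
    assert_ne!(payload_hash(&[1, 2, 3]), payload_hash(&[1, 2, 4]));
}
```

Note this is a cheap heuristic, not a cryptographic hash: bytes 8..15 XOR over the same lanes as bytes 0..7, so a payload of 16 identical bytes folds to zero. That is fine for the stated purpose (distinguishing multi-path duplicates of the same Opus frame) since the key is scoped per room hash and sequence number.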
@@ -78,31 +78,26 @@ pub async fn accept_handshake(
     };
     transport.send_signal(&answer).await?;
 
-    // Derive caller fingerprint from their identity public key (first 8 bytes as hex)
-    let caller_fp = caller_identity_pub[..8]
-        .iter()
-        .map(|b| format!("{b:02x}"))
-        .collect::<String>();
+    // Derive caller fingerprint: SHA-256(Ed25519 pub)[:16], formatted as xxxx:xxxx:...
+    // Must match the format used in signal registration and presence.
+    let caller_fp = {
+        use sha2::{Sha256, Digest};
+        let hash = Sha256::digest(&caller_identity_pub);
+        let fp = wzp_crypto::Fingerprint([
+            hash[0], hash[1], hash[2], hash[3], hash[4], hash[5], hash[6], hash[7],
+            hash[8], hash[9], hash[10], hash[11], hash[12], hash[13], hash[14], hash[15],
+        ]);
+        fp.to_string()
+    };
 
     Ok((session, chosen_profile, caller_fp, caller_alias))
 }
 
 /// Select the best quality profile from those the caller supports.
 fn choose_profile(supported: &[QualityProfile]) -> QualityProfile {
-    // Prefer higher-quality profiles. Use GOOD as default if supported list is empty.
-    if supported.is_empty() {
-        return QualityProfile::GOOD;
-    }
-    // Pick the profile with the highest bitrate.
-    supported
-        .iter()
-        .max_by(|a, b| {
-            a.total_bitrate_kbps()
-                .partial_cmp(&b.total_bitrate_kbps())
-                .unwrap_or(std::cmp::Ordering::Equal)
-        })
-        .copied()
-        .unwrap_or(QualityProfile::GOOD)
+    // Cap at GOOD (24k) for now — studio tiers (32k/48k/64k) not yet tested
+    // for federation reliability (large packets may exceed path MTU).
+    QualityProfile::GOOD
 }
 
 #[cfg(test)]
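The new derivation truncates SHA-256 of the Ed25519 key to 16 bytes and leans on `wzp_crypto::Fingerprint`'s `Display` for the `xxxx:xxxx:...` form. Assuming that comment means 2-byte hex groups joined by colons (the `Display` details are my guess, not confirmed by the diff), the formatting step alone looks like:

```rust
/// Hypothetical stand-in for wzp_crypto::Fingerprint's Display:
/// 16 bytes rendered as eight colon-separated 4-hex-digit groups.
fn format_fingerprint(bytes: &[u8; 16]) -> String {
    bytes
        .chunks(2)
        .map(|pair| format!("{:02x}{:02x}", pair[0], pair[1]))
        .collect::<Vec<_>>()
        .join(":")
}

fn main() {
    let fp = [
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
    ];
    assert_eq!(format_fingerprint(&fp), "0001:0203:0405:0607:0809:0a0b:0c0d:0e0f");
}
```

The commit's stated motivation matters here: both this handshake path and the `_signal` registration path must produce byte-identical strings, or presence lookups and call routing keyed by fingerprint silently miss.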
@@ -8,7 +8,11 @@
 //! quality transitions.
 
 pub mod auth;
+pub mod call_registry;
 pub mod config;
+pub mod event_log;
+pub mod federation;
+pub mod signal_hub;
 pub mod handshake;
 pub mod metrics;
 pub mod pipeline;
@@ -13,9 +13,9 @@ use std::sync::Arc;
 use std::time::Duration;
 
 use tokio::sync::Mutex;
-use tracing::{error, info};
+use tracing::{error, info, warn};
 
-use wzp_proto::MediaTransport;
+use wzp_proto::{MediaTransport, SignalMessage};
 use wzp_relay::config::RelayConfig;
 use wzp_relay::metrics::RelayMetrics;
 use wzp_relay::pipeline::{PipelineConfig, RelayPipeline};
@@ -23,12 +23,54 @@ use wzp_relay::presence::PresenceRegistry;
 use wzp_relay::room::{self, RoomManager};
 use wzp_relay::session_mgr::SessionManager;
 
-fn parse_args() -> RelayConfig {
-    let mut config = RelayConfig::default();
+/// Parsed CLI result — config + identity path.
+struct CliResult {
+    config: RelayConfig,
+    identity_path: Option<String>,
+    config_file: Option<String>,
+    config_needs_create: bool,
+}
+
+fn parse_args() -> CliResult {
     let args: Vec<String> = std::env::args().collect();
+
+    // First pass: extract --config and --identity
+    let mut config_file = None;
+    let mut identity_path = None;
     let mut i = 1;
     while i < args.len() {
         match args[i].as_str() {
+            "--config" | "-c" => { i += 1; config_file = args.get(i).cloned(); }
+            "--identity" | "-i" => { i += 1; identity_path = args.get(i).cloned(); }
+            _ => {}
+        }
+        i += 1;
+    }
+
+    // Track if we need to create the config after identity is known
+    let config_needs_create = config_file.as_ref().map(|p| !std::path::Path::new(p).exists()).unwrap_or(false);
+
+    let mut config = if let Some(ref path) = config_file {
+        if config_needs_create {
+            // Will be re-created with personalized info after identity is loaded
+            RelayConfig::default()
+        } else {
+            wzp_relay::config::load_config(path)
+                .unwrap_or_else(|e| {
+                    eprintln!("failed to load config from {path}: {e}");
+                    std::process::exit(1);
+                })
+        }
+    } else {
+        RelayConfig::default()
+    };
+
+    // CLI flags override config file values
+    let mut i = 1;
+    while i < args.len() {
+        match args[i].as_str() {
+            "--config" | "-c" => { i += 1; } // already handled
+            "--identity" | "-i" => { i += 1; } // already handled
             "--listen" => {
                 i += 1;
                 config.listen_addr = args.get(i).expect("--listen requires an address")
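The rewritten `parse_args` is a two-pass scan: pass one pulls out `--config`/`--identity` (needed before the config can even be loaded), pass two applies the remaining flags as overrides on top of the loaded file. The shape of that idiom, reduced to a testable function over a plain slice (flag set and names trimmed down by me):

```rust
/// Two-pass flag scan: bootstrap flags first, overrides second.
fn parse(args: &[String]) -> (Option<String>, Option<String>) {
    // Pass 1: extract the flag that later parsing depends on.
    let mut config_file = None;
    let mut i = 0;
    while i < args.len() {
        if args[i] == "--config" { i += 1; config_file = args.get(i).cloned(); }
        i += 1;
    }
    // Pass 2: everything else, skipping the already-handled flag's value slot.
    let mut listen = None;
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            "--config" => { i += 1; } // already handled in pass 1
            "--listen" => { i += 1; listen = args.get(i).cloned(); }
            _ => {}
        }
        i += 1;
    }
    (config_file, listen)
}

fn main() {
    let args: Vec<String> = ["--listen", "0.0.0.0:4433", "--config", "relay.toml"]
        .iter().map(|s| s.to_string()).collect();
    let (cfg, listen) = parse(&args);
    assert_eq!(cfg.as_deref(), Some("relay.toml"));
    assert_eq!(listen.as_deref(), Some("0.0.0.0:4433"));
}
```

The design point is ordering: CLI flags must win over file values, so the file is loaded between the passes and pass two mutates the loaded `config` in place.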
@@ -81,6 +123,28 @@ fn parse_args() -> RelayConfig {
                     args.get(i).expect("--static-dir requires a directory path").to_string(),
                 );
             }
+            "--global-room" => {
+                i += 1;
+                config.global_rooms.push(wzp_relay::config::GlobalRoomConfig {
+                    name: args.get(i).expect("--global-room requires a room name").to_string(),
+                });
+            }
+            "--debug-tap" => {
+                i += 1;
+                config.debug_tap = Some(
+                    args.get(i).expect("--debug-tap requires a room name (or '*' for all)").to_string(),
+                );
+            }
+            "--event-log" => {
+                i += 1;
+                config.event_log = Some(
+                    args.get(i).expect("--event-log requires a file path").to_string(),
+                );
+            }
+            "--version" | "-V" => {
+                println!("wzp-relay {}", env!("WZP_BUILD_HASH"));
+                std::process::exit(0);
+            }
             "--mesh-status" => {
                 // Print mesh table from a fresh registry and exit.
                 // In practice this is useful after the relay has been running;
@@ -90,9 +154,11 @@ fn parse_args() -> RelayConfig {
                 std::process::exit(0);
             }
             "--help" | "-h" => {
-                eprintln!("Usage: wzp-relay [--listen <addr>] [--remote <addr>] [--auth-url <url>] [--metrics-port <port>] [--probe <addr>]... [--probe-mesh] [--mesh-status]");
+                eprintln!("Usage: wzp-relay [--config <path>] [--listen <addr>] [--remote <addr>] [--auth-url <url>] [--metrics-port <port>] [--probe <addr>]... [--probe-mesh] [--mesh-status]");
                 eprintln!();
                 eprintln!("Options:");
+                eprintln!("  -c, --config <path>    Load config from TOML file (creates example if missing)");
+                eprintln!("  -i, --identity <path>  Identity file path (creates if missing, uses OsRng)");
                 eprintln!("  --listen <addr>        Listen address (default: 0.0.0.0:4433)");
                 eprintln!("  --remote <addr>        Remote relay for forwarding (disables room mode)");
                 eprintln!("  --auth-url <url>       featherChat auth endpoint (e.g., https://chat.example.com/v1/auth/validate)");
@@ -102,6 +168,8 @@ fn parse_args() -> RelayConfig {
                 eprintln!("  --probe-mesh           Enable mesh mode (mark config flag, probes all --probe targets).");
                 eprintln!("  --mesh-status          Print mesh health table and exit (diagnostic).");
                 eprintln!("  --trunking             Enable trunk batching for outgoing media in room mode.");
+                eprintln!("  --global-room <name>   Declare a room as global (bridged across federation). Repeatable.");
+                eprintln!("  --debug-tap <room>     Log packet headers for a room ('*' for all rooms).");
                 eprintln!("  --ws-port <port>       WebSocket listener port for browser clients (e.g., 8080).");
                 eprintln!("  --static-dir <dir>     Directory to serve static files from (HTML/JS/WASM).");
                 eprintln!();
@@ -116,7 +184,7 @@ fn parse_args() -> RelayConfig {
         }
         i += 1;
     }
-    config
+    CliResult { config, identity_path, config_file, config_needs_create }
 }
 
 struct RelayStats {
@@ -184,10 +252,29 @@ async fn run_downstream(
     }
 }
 
+/// Detect a non-loopback IP address from local interfaces.
+/// Prefers public IPs over private (10.x, 172.16-31.x, 192.168.x).
+fn detect_public_ip() -> Option<String> {
+    use std::net::UdpSocket;
+    // Connect to a public address to find our outbound IP (doesn't actually send anything)
+    if let Ok(socket) = UdpSocket::bind("0.0.0.0:0") {
+        if socket.connect("8.8.8.8:80").is_ok() {
+            if let Ok(addr) = socket.local_addr() {
+                return Some(addr.ip().to_string());
+            }
+        }
+    }
+    None
+}
+
+/// Build-time git hash, set by build.rs or env.
+const BUILD_GIT_HASH: &str = env!("WZP_BUILD_HASH");
+
 #[tokio::main]
 async fn main() -> anyhow::Result<()> {
-    let config = parse_args();
+    let CliResult { mut config, identity_path, config_file, config_needs_create } = parse_args();
     tracing_subscriber::fmt().init();
+    info!(version = BUILD_GIT_HASH, "wzp-relay build");
     rustls::crypto::ring::default_provider()
         .install_default()
        .expect("failed to install rustls crypto provider");
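`detect_public_ip` uses the classic UDP trick: `connect()` on a datagram socket never sends a packet, it only asks the kernel to pick a route, after which `local_addr()` reveals the source IP of the outbound interface. A self-contained variant targeting loopback (so it runs without a default route; the relay's version targets `8.8.8.8:80` to pick the internet-facing interface):

```rust
use std::net::UdpSocket;

/// Ask the kernel which local IP would be used to reach `dest`.
/// No packet is sent — UDP connect() only fixes the route.
fn outbound_ip(dest: &str) -> Option<String> {
    let socket = UdpSocket::bind("0.0.0.0:0").ok()?;
    socket.connect(dest).ok()?;
    Some(socket.local_addr().ok()?.ip().to_string())
}

fn main() {
    // Routing toward loopback resolves to 127.0.0.1.
    assert_eq!(outbound_ip("127.0.0.1:9").as_deref(), Some("127.0.0.1"));
}
```

Caveat: behind NAT this yields the private interface address (10.x / 192.168.x), not the address peers can dial, despite the doc comment's "prefers public IPs" wording; the printed federation hint below inherits that limitation.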
@@ -207,12 +294,88 @@ async fn main() -> anyhow::Result<()> {
         tokio::spawn(wzp_relay::metrics::serve_metrics(port, m, p, rr));
     }
 
-    // Generate ephemeral relay identity for crypto handshake
-    let relay_seed = wzp_crypto::Seed::generate();
+    // Load or generate relay identity
+    let relay_seed = {
+        let id_path = match identity_path {
+            Some(ref p) => std::path::PathBuf::from(p),
+            None => dirs::home_dir()
+                .unwrap_or_else(|| std::path::PathBuf::from("."))
+                .join(".wzp")
+                .join("relay-identity"),
+        };
+        if id_path.exists() {
+            if let Ok(hex) = std::fs::read_to_string(&id_path) {
+                if let Ok(s) = wzp_crypto::Seed::from_hex(hex.trim()) {
+                    info!("loaded relay identity from {}", id_path.display());
+                    s
+                } else {
+                    warn!("corrupt identity file {}, generating new", id_path.display());
+                    let s = wzp_crypto::Seed::generate();
+                    let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
+                    let _ = std::fs::write(&id_path, &hex);
+                    s
+                }
+            } else {
+                let s = wzp_crypto::Seed::generate();
+                let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
+                let _ = std::fs::write(&id_path, &hex);
+                s
+            }
+        } else {
+            let s = wzp_crypto::Seed::generate();
+            if let Some(parent) = id_path.parent() {
+                let _ = std::fs::create_dir_all(parent);
+            }
+            let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
+            let _ = std::fs::write(&id_path, &hex);
+            info!("generated relay identity at {}", id_path.display());
+            s
+        }
+    };
     let relay_fp = relay_seed.derive_identity().public_identity().fingerprint;
     info!(addr = %config.listen_addr, fingerprint = %relay_fp, "WarzonePhone relay starting");
 
-    let (server_config, _cert) = wzp_transport::server_config();
+    let (server_config, cert_der) = wzp_transport::server_config_from_seed(&relay_seed.0);
+    let tls_fp = wzp_transport::tls_fingerprint(&cert_der);
+    info!(tls_fingerprint = %tls_fp, "TLS certificate (deterministic from relay identity)");
+
+    // Create personalized config file if it was missing
+    let public_ip = detect_public_ip();
+    if config_needs_create {
+        if let Some(ref path) = config_file {
+            let info = wzp_relay::config::RelayInfo {
+                listen_addr: config.listen_addr.to_string(),
+                tls_fingerprint: tls_fp.clone(),
+                public_ip: public_ip.clone(),
+            };
+            if let Err(e) = wzp_relay::config::load_or_create_config(path, Some(&info)) {
+                warn!("failed to create config: {e}");
+            }
+        }
+    }
+
+    // Print federation hint with our public IP + listen port + TLS fingerprint
+    let listen_port = config.listen_addr.port();
+    if let Some(ip) = &public_ip {
+        info!("federation: to peer with this relay, add to relay.toml:");
+        info!("  [[peers]]");
+        info!("  url = \"{ip}:{listen_port}\"");
+        info!("  fingerprint = \"{tls_fp}\"");
+    }
+
+    // Log configured peers and trusted relays
+    if !config.peers.is_empty() {
+        info!(count = config.peers.len(), "federation peers configured");
+        for p in &config.peers {
+            info!(url = %p.url, label = ?p.label, "  peer");
+        }
+    }
+    if !config.trusted.is_empty() {
+        info!(count = config.trusted.len(), "trusted relays configured");
+        for t in &config.trusted {
+            info!(fingerprint = %t.fingerprint, label = ?t.label, "  trusted");
+        }
+    }
     let endpoint = wzp_transport::create_endpoint(config.listen_addr, Some(server_config))?;
 
     // Forward mode
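The load-or-generate block persists the 32-byte seed as lowercase hex and reads it back via `Seed::from_hex(hex.trim())`. The round-trip it depends on can be sketched with stdlib only (`from_hex` here is a hand-rolled illustration of the invariant, not the crate's implementation):

```rust
/// Encode seed bytes the way the identity file stores them.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

/// Decode the identity file back into bytes (None on corrupt input).
fn from_hex(s: &str) -> Option<Vec<u8>> {
    let s = s.trim();
    if s.len() % 2 != 0 { return None; }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).ok())
        .collect()
}

fn main() {
    let seed = [0xde, 0xad, 0xbe, 0xef];
    let hex = to_hex(&seed);
    assert_eq!(hex, "deadbeef");
    // A trailing newline in the file must not break the decode — hence the trim().
    assert_eq!(from_hex("deadbeef\n").as_deref(), Some(&seed[..]));
    // Corrupt input maps to None, which the relay treats as "generate new".
    assert_eq!(from_hex("not-hex"), None);
}
```

Persisting the seed is what makes the TLS certificate deterministic across restarts (`server_config_from_seed`), so peers can pin `tls_fp` once in `relay.toml` instead of re-trusting after every restart.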
@@ -230,9 +393,41 @@ async fn main() -> anyhow::Result<()> {
     // Room manager (room mode only)
     let room_mgr = Arc::new(Mutex::new(RoomManager::new()));
 
+    // Event log for protocol analysis
+    let event_log = wzp_relay::event_log::start_event_log(
+        config.event_log.as_ref().map(std::path::PathBuf::from)
+    );
+
+    // Federation manager
+    let global_room_set: std::collections::HashSet<String> = config.global_rooms.iter()
+        .map(|g| g.name.clone())
+        .collect();
+
+    let federation_mgr = if !config.peers.is_empty() || !config.trusted.is_empty() || !global_room_set.is_empty() {
+        let fm = Arc::new(wzp_relay::federation::FederationManager::new(
+            config.peers.clone(),
+            config.trusted.clone(),
+            global_room_set.clone(),
+            room_mgr.clone(),
+            endpoint.clone(),
+            tls_fp.clone(),
+            metrics.clone(),
+            event_log.clone(),
+        ));
+        let fm_run = fm.clone();
+        tokio::spawn(async move { fm_run.run().await });
+        Some(fm)
+    } else {
+        None
+    };
+
     // Session manager — enforces max concurrent sessions
     let session_mgr = Arc::new(Mutex::new(SessionManager::new(config.max_sessions)));
 
+    // Signal hub + call registry for direct 1:1 calls
+    let signal_hub = Arc::new(Mutex::new(wzp_relay::signal_hub::SignalHub::new()));
+    let call_registry = Arc::new(Mutex::new(wzp_relay::call_registry::CallRegistry::new()));
+
     // Spawn inter-relay health probes via ProbeMesh coordinator
     if !config.probe_targets.is_empty() {
         let mesh = wzp_relay::probe::ProbeMesh::new(
@@ -267,6 +462,15 @@ async fn main() -> anyhow::Result<()> {
     } else {
         info!("auth disabled — any client can connect (use --auth-url to enable)");
     }
+    if !config.global_rooms.is_empty() {
+        info!(count = config.global_rooms.len(), "global rooms configured");
+        for g in &config.global_rooms {
+            info!(name = %g.name, "  global room");
+        }
+    }
+    if let Some(ref tap) = config.debug_tap {
+        info!(filter = %tap, "debug tap enabled — logging packet headers");
+    }
 
     info!("Listening for connections...");
 
@@ -283,8 +487,13 @@ async fn main() -> anyhow::Result<()> {
         let relay_seed_bytes = relay_seed.0;
         let metrics = metrics.clone();
         let trunking_enabled = config.trunking_enabled;
+        let debug_tap = config.debug_tap.as_ref().map(|filter| room::DebugTap { room_filter: filter.clone() });
         let presence = presence.clone();
         let route_resolver = route_resolver.clone();
+        let federation_mgr = federation_mgr.clone();
+        let signal_hub = signal_hub.clone();
+        let call_registry = call_registry.clone();
+        let listen_addr_str = config.listen_addr.to_string();
 
         tokio::spawn(async move {
             let addr = connection.remote_address();
@@ -299,6 +508,23 @@ async fn main() -> anyhow::Result<()> {
 
             let transport = Arc::new(wzp_transport::QuinnTransport::new(connection));
 
+            // Ping connections: client just measures QUIC connect RTT.
+            if room_name == "ping" {
+                info!(%addr, "ping connection (RTT probe)");
+                return;
+            }
+
+            // Version query: respond with build hash over a uni stream.
+            if room_name == "version" {
+                if let Ok(mut send) = transport.connection().open_uni().await {
+                    let _ = send.write_all(BUILD_GIT_HASH.as_bytes()).await;
+                    let _ = send.finish();
+                    // Wait for client to read before closing
+                    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
+                }
+                return;
+            }
+
             // Probe connections use SNI "_probe" to identify themselves.
             // They skip auth + handshake and just do Ping->Pong + presence gossip.
             if room_name == "_probe" {
@@ -385,6 +611,294 @@ async fn main() -> anyhow::Result<()> {
                 return;
             }
 
+            // Federation connections use SNI "_federation"
+            if room_name == "_federation" {
+                if let Some(ref fm) = federation_mgr {
+                    // Wait for FederationHello to identify the connecting relay
+                    let hello_fp = match tokio::time::timeout(
+                        std::time::Duration::from_secs(5),
+                        transport.recv_signal(),
+                    ).await {
+                        Ok(Ok(Some(wzp_proto::SignalMessage::FederationHello { tls_fingerprint }))) => tls_fingerprint,
+                        _ => {
+                            warn!(%addr, "federation: no hello received, closing");
+                            return;
+                        }
+                    };
+
+                    if let Some(label) = fm.check_inbound_trust(addr, &hello_fp) {
+                        let peer_config = wzp_relay::config::PeerConfig {
+                            url: addr.to_string(),
+                            fingerprint: hello_fp,
+                            label: Some(label.clone()),
+                        };
+                        let fm = fm.clone();
+                        info!(%addr, label = %label, "inbound federation accepted (trusted)");
+                        fm.handle_inbound(transport, peer_config).await;
+                    } else {
+                        warn!(%addr, fp = %hello_fp, "unknown relay wants to federate");
+                        info!("  to accept, add to relay.toml:");
+                        info!("  [[trusted]]");
+                        info!("  fingerprint = \"{hello_fp}\"");
+                        info!("  label = \"Relay at {addr}\"");
+                    }
+                } else {
+                    info!(%addr, "federation connection rejected (no federation configured)");
+                }
+                return;
+            }
+
+            // Direct calling: persistent signaling connection
+            if room_name == "_signal" {
+                info!(%addr, "signal connection");
+
+                // Optional auth
+                let auth_fp: Option<String> = if let Some(ref url) = auth_url {
+                    match transport.recv_signal().await {
+                        Ok(Some(SignalMessage::AuthToken { token })) => {
+                            match wzp_relay::auth::validate_token(url, &token).await {
+                                Ok(client) => Some(client.fingerprint),
+                                Err(e) => {
+                                    error!(%addr, "signal auth failed: {e}");
+                                    return;
+                                }
+                            }
+                        }
+                        _ => { warn!(%addr, "signal: expected AuthToken"); return; }
+                    }
+                } else {
+                    None
+                };
+
+                // Wait for RegisterPresence
+                let (client_fp, client_alias) = match tokio::time::timeout(
+                    std::time::Duration::from_secs(10),
+                    transport.recv_signal(),
+                ).await {
+                    Ok(Ok(Some(SignalMessage::RegisterPresence { identity_pub, signature: _, alias }))) => {
+                        // Compute fingerprint: SHA-256(Ed25519 pub key)[:16], same as Fingerprint type
+                        let fp = {
+                            use sha2::{Sha256, Digest};
+                            let hash = Sha256::digest(&identity_pub);
+                            let fingerprint = wzp_crypto::Fingerprint([
+                                hash[0], hash[1], hash[2], hash[3], hash[4], hash[5], hash[6], hash[7],
+                                hash[8], hash[9], hash[10], hash[11], hash[12], hash[13], hash[14], hash[15],
+                            ]);
+                            fingerprint.to_string()
+                        };
+                        let fp = auth_fp.unwrap_or(fp);
+                        (fp, alias)
+                    }
+                    _ => {
+                        warn!(%addr, "signal: no RegisterPresence received");
+                        return;
+                    }
+                };
+
+                // Register in signal hub + presence
+                {
+                    let mut hub = signal_hub.lock().await;
+                    hub.register(client_fp.clone(), transport.clone(), client_alias.clone());
+                }
+                {
+                    let mut reg = presence.lock().await;
+                    reg.register_local(&client_fp, client_alias.clone(), None);
+                }
+
+                // Send ack
+                let _ = transport.send_signal(&SignalMessage::RegisterPresenceAck {
+                    success: true,
+                    error: None,
+                }).await;
+
+                info!(%addr, fingerprint = %client_fp, alias = ?client_alias, "signal client registered");
+
+                // Signal recv loop
+                loop {
+                    match transport.recv_signal().await {
+                        Ok(Some(msg)) => {
+                            match msg {
+                                SignalMessage::DirectCallOffer { ref target_fingerprint, ref call_id, ref caller_alias, .. } => {
+                                    let target_fp = target_fingerprint.clone();
+                                    let call_id = call_id.clone();
+
+                                    // Check if target is online
+                                    let online = {
+                                        let hub = signal_hub.lock().await;
+                                        hub.is_online(&target_fp)
+                                    };
+                                    if !online {
+                                        info!(%addr, target = %target_fp, "call target not online");
+                                        let _ = transport.send_signal(&SignalMessage::Hangup {
+                                            reason: wzp_proto::HangupReason::Normal,
+                                        }).await;
+                                        continue;
+                                    }
+
+                                    // Create call in registry
+                                    {
+                                        let mut reg = call_registry.lock().await;
+                                        reg.create_call(call_id.clone(), client_fp.clone(), target_fp.clone());
+                                    }
+
+                                    // Forward offer to callee
+                                    info!(caller = %client_fp, callee = %target_fp, call_id = %call_id, "routing direct call offer");
+                                    let hub = signal_hub.lock().await;
+                                    if let Err(e) = hub.send_to(&target_fp, &msg).await {
+                                        warn!("failed to forward call offer: {e}");
+                                    }
+
+                                    // Send ringing to caller
+                                    drop(hub);
+                                    let _ = transport.send_signal(&SignalMessage::CallRinging {
+                                        call_id: call_id.clone(),
+                                    }).await;
+                                }
+
+                                SignalMessage::DirectCallAnswer { ref call_id, ref accept_mode, .. } => {
+                                    let call_id = call_id.clone();
+                                    let mode = *accept_mode;
+
+                                    let peer_fp = {
+                                        let reg = call_registry.lock().await;
+                                        reg.peer_fingerprint(&call_id, &client_fp).map(|s| s.to_string())
+                                    };
+
+                                    let Some(peer_fp) = peer_fp else {
+                                        warn!(call_id = %call_id, "answer for unknown call");
+                                        continue;
+                                    };
+
+                                    if mode == wzp_proto::CallAcceptMode::Reject {
+                                        info!(call_id = %call_id, "call rejected");
+                                        let mut reg = call_registry.lock().await;
+                                        reg.end_call(&call_id);
+                                        drop(reg);
+                                        let hub = signal_hub.lock().await;
+                                        let _ = hub.send_to(&peer_fp, &SignalMessage::Hangup {
+                                            reason: wzp_proto::HangupReason::Normal,
+                                        }).await;
+                                    } else {
+                                        // Accept — create private room
+                                        let room = format!("call-{call_id}");
+                                        {
+                                            let mut reg = call_registry.lock().await;
+                                            reg.set_active(&call_id, mode, room.clone());
+                                        }
+                                        info!(call_id = %call_id, room = %room, mode = ?mode, "call accepted, creating room");
+
+                                        // Forward answer to caller
+                                        {
+                                            let hub = signal_hub.lock().await;
+                                            let _ = hub.send_to(&peer_fp, &msg).await;
+                                        }
+
+                                        // Send CallSetup to both parties
+                                        // Use the address the client connected to (their remote addr
+                                        // is our perspective, but we need our listen addr).
+                                        // Replace 0.0.0.0 with the client's destination IP.
+                                        let relay_addr_for_setup = if listen_addr_str.starts_with("0.0.0.0:") {
+                                            let port = &listen_addr_str[8..];
+                                            // Use the local IP from the client's connection
+                                            let local_ip = addr.ip();
+                                            if local_ip.is_loopback() {
+                                                format!("127.0.0.1:{port}")
+                                            } else {
+                                                format!("{local_ip}:{port}")
+                                            }
+                                        } else {
+                                            listen_addr_str.clone()
+                                        };
+                                        let setup = SignalMessage::CallSetup {
+                                            call_id: call_id.clone(),
+                                            room: room.clone(),
+                                            relay_addr: relay_addr_for_setup,
+                                        };
+                                        {
+                                            let hub = signal_hub.lock().await;
+                                            let _ = hub.send_to(&peer_fp, &setup).await;
+                                            let _ = hub.send_to(&client_fp, &setup).await;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
SignalMessage::Hangup { .. } => {
|
||||||
|
// Forward hangup to all active calls for this user
|
||||||
|
let calls = {
|
||||||
|
let reg = call_registry.lock().await;
|
||||||
|
reg.calls_for_fingerprint(&client_fp)
|
||||||
|
.iter()
|
||||||
|
.map(|c| (c.call_id.clone(), if c.caller_fingerprint == client_fp {
|
||||||
|
c.callee_fingerprint.clone()
|
||||||
|
} else {
|
||||||
|
c.caller_fingerprint.clone()
|
||||||
|
}))
|
||||||
|
.collect::<Vec<_>>()
|
||||||
|
};
|
||||||
|
for (call_id, peer_fp) in &calls {
|
||||||
|
let hub = signal_hub.lock().await;
|
||||||
|
let _ = hub.send_to(peer_fp, &msg).await;
|
||||||
|
drop(hub);
|
||||||
|
let mut reg = call_registry.lock().await;
|
||||||
|
reg.end_call(call_id);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
SignalMessage::Ping { timestamp_ms } => {
|
||||||
|
let _ = transport.send_signal(&SignalMessage::Pong { timestamp_ms }).await;
|
||||||
|
}
|
||||||
|
|
||||||
|
other => {
|
||||||
|
warn!(%addr, "signal: unexpected message: {:?}", std::mem::discriminant(&other));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Ok(None) => {
|
||||||
|
info!(%addr, "signal connection closed");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
warn!(%addr, "signal recv error: {e}");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Cleanup: unregister + end active calls
|
||||||
|
let active_calls = {
|
||||||
|
let reg = call_registry.lock().await;
|
||||||
|
reg.calls_for_fingerprint(&client_fp)
|
||||||
|
.iter()
|
||||||
|
.map(|c| (c.call_id.clone(), if c.caller_fingerprint == client_fp {
|
||||||
|
c.callee_fingerprint.clone()
|
||||||
|
} else {
|
||||||
|
c.caller_fingerprint.clone()
|
||||||
|
}))
|
||||||
|
.collect::<Vec<_>>()
|
||||||
|
};
|
||||||
|
for (call_id, peer_fp) in &active_calls {
|
||||||
|
let hub = signal_hub.lock().await;
|
||||||
|
let _ = hub.send_to(peer_fp, &SignalMessage::Hangup {
|
||||||
|
reason: wzp_proto::HangupReason::Normal,
|
||||||
|
}).await;
|
||||||
|
drop(hub);
|
||||||
|
let mut reg = call_registry.lock().await;
|
||||||
|
reg.end_call(call_id);
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
let mut hub = signal_hub.lock().await;
|
||||||
|
hub.unregister(&client_fp);
|
||||||
|
}
|
||||||
|
{
|
||||||
|
let mut reg = presence.lock().await;
|
||||||
|
reg.unregister_local(&client_fp);
|
||||||
|
}
|
||||||
|
|
||||||
|
transport.close().await.ok();
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
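The `relay_addr_for_setup` logic above rewrites a wildcard listen address so the CallSetup carries an address the peer can actually dial. A std-only sketch of that substitution (the helper name `relay_addr_for_setup` as a free function is hypothetical; the real code inlines it):

```rust
use std::net::{IpAddr, Ipv4Addr};

/// Rewrite a wildcard listen address ("0.0.0.0:<port>") into something a
/// callee can dial, using the local IP the client's connection arrived on.
fn relay_addr_for_setup(listen_addr: &str, conn_local_ip: IpAddr) -> String {
    if let Some(port) = listen_addr.strip_prefix("0.0.0.0:") {
        if conn_local_ip.is_loopback() {
            format!("127.0.0.1:{port}")
        } else {
            format!("{conn_local_ip}:{port}")
        }
    } else {
        // Explicit listen addresses pass through unchanged.
        listen_addr.to_string()
    }
}

fn main() {
    let lan = IpAddr::V4(Ipv4Addr::new(192, 168, 1, 10));
    assert_eq!(relay_addr_for_setup("0.0.0.0:7000", lan), "192.168.1.10:7000");
    let lo = IpAddr::V4(Ipv4Addr::LOCALHOST);
    assert_eq!(relay_addr_for_setup("0.0.0.0:7000", lo), "127.0.0.1:7000");
    assert_eq!(relay_addr_for_setup("203.0.113.5:7000", lo), "203.0.113.5:7000");
    println!("ok");
}
```

`strip_prefix` replaces the original `&listen_addr_str[8..]` slice; both take everything after `"0.0.0.0:"`.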
// Auth check: if --auth-url is set, expect first signal message to be a token
// Auth: if --auth-url is set, expect AuthToken as first signal
let authenticated_fp: Option<String> = if let Some(ref url) = auth_url {

@@ -451,6 +965,28 @@ async fn main() -> anyhow::Result<()> {

// Use the caller's identity fingerprint from the handshake
let participant_fp = authenticated_fp.clone().unwrap_or(caller_fp);
// ACL: call rooms (call-*) are restricted to the two authorized participants.
// Only the relay's call orchestrator creates these rooms — random clients can't join.
if room_name.starts_with("call-") {
    let call_id = &room_name[5..]; // strip "call-" prefix
    let authorized = {
        let reg = call_registry.lock().await;
        match reg.get(call_id) {
            Some(call) => {
                call.caller_fingerprint == participant_fp
                    || call.callee_fingerprint == participant_fp
            }
            None => false, // unknown call — reject
        }
    };
    if !authorized {
        warn!(%addr, room = %room_name, fp = %participant_fp, "rejected: not authorized for this call room");
        transport.close().await.ok();
        return;
    }
    info!(%addr, room = %room_name, fp = %participant_fp, "authorized for call room");
}
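The call-room ACL above admits exactly the two fingerprints recorded for the call and rejects unknown call IDs. A self-contained sketch of that decision, with the registry modeled as a plain map (the `authorized_for_room` helper and the `(caller, callee)` tuple shape are illustrative, not the crate's API):

```rust
use std::collections::HashMap;

/// Minimal stand-in for the call registry: call_id -> (caller_fp, callee_fp).
type CallRegistry = HashMap<String, (String, String)>;

/// A "call-<id>" room admits only the two fingerprints recorded for that
/// call; an unknown call is rejected outright.
fn authorized_for_room(reg: &CallRegistry, room_name: &str, participant_fp: &str) -> bool {
    match room_name.strip_prefix("call-") {
        // Not a call room: no call-specific restriction applies in this sketch.
        None => true,
        Some(call_id) => match reg.get(call_id) {
            Some((caller, callee)) => caller == participant_fp || callee == participant_fp,
            None => false, // unknown call — reject
        },
    }
}

fn main() {
    let mut reg = CallRegistry::new();
    reg.insert("42".to_string(), ("fp-alice".to_string(), "fp-bob".to_string()));
    assert!(authorized_for_room(&reg, "call-42", "fp-alice"));
    assert!(authorized_for_room(&reg, "call-42", "fp-bob"));
    assert!(!authorized_for_room(&reg, "call-42", "fp-mallory"));
    assert!(!authorized_for_room(&reg, "call-99", "fp-alice")); // unknown call
    println!("ok");
}
```

Rejecting unknown call IDs (rather than defaulting open) is what keeps the orchestrator the only party that can mint joinable call rooms.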
// Register in presence registry
{
    let mut reg = presence.lock().await;

@@ -503,6 +1039,20 @@ async fn main() -> anyhow::Result<()> {

metrics.active_sessions.inc();

// Call rooms: enforce 2-participant limit
if room_name.starts_with("call-") {
    let mgr = room_mgr.lock().await;
    if mgr.room_size(&room_name) >= 2 {
        drop(mgr);
        warn!(%addr, room = %room_name, "call room full (max 2 participants)");
        metrics.active_sessions.dec();
        let mut smgr = session_mgr.lock().await;
        smgr.remove_session(session_id);
        transport.close().await.ok();
        return;
    }
}

let participant_id = {
    let mut mgr = room_mgr.lock().await;
    match mgr.join(
@@ -515,7 +1065,25 @@ async fn main() -> anyhow::Result<()> {

Ok((id, update, senders)) => {
    metrics.active_rooms.set(mgr.list().len() as i64);
    drop(mgr); // release lock before async broadcast

    // Merge federated participants into RoomUpdate if this is a global room
    let merged_update = if let Some(ref fm) = federation_mgr {
        if fm.is_global_room(&room_name) {
            if let SignalMessage::RoomUpdate { count: _, participants: mut local_parts } = update {
                let remote = fm.get_remote_participants(&room_name).await;
                local_parts.extend(remote);
                // Deduplicate by fingerprint
                let mut seen = std::collections::HashSet::new();
                local_parts.retain(|p| seen.insert(p.fingerprint.clone()));
                SignalMessage::RoomUpdate {
                    count: local_parts.len() as u32,
                    participants: local_parts,
                }
            } else { update }
        } else { update }
    } else { update };

    room::broadcast_signal(&senders, &merged_update).await;
    id
}
Err(e) => {
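The `retain`/`HashSet::insert` idiom above keeps the first occurrence of each fingerprint, so local entries (appended before the federated ones) win over remote duplicates. Isolated with a minimal participant type for illustration:

```rust
use std::collections::HashSet;

// Simplified stand-in for wzp_proto::packet::RoomParticipant.
#[derive(Debug, PartialEq)]
struct Participant {
    fingerprint: String,
    alias: Option<String>,
}

/// Keep only the first entry per fingerprint. `HashSet::insert` returns
/// false on a repeat, so `retain` drops later duplicates.
fn dedup_by_fingerprint(parts: &mut Vec<Participant>) {
    let mut seen = HashSet::new();
    parts.retain(|p| seen.insert(p.fingerprint.clone()));
}

fn main() {
    let mut parts = vec![
        Participant { fingerprint: "aa".into(), alias: Some("local".into()) },
        Participant { fingerprint: "bb".into(), alias: None },
        // Remote copy of "aa" arrives after the local one — dropped.
        Participant { fingerprint: "aa".into(), alias: Some("remote".into()) },
    ];
    dedup_by_fingerprint(&mut parts);
    assert_eq!(parts.len(), 2);
    assert_eq!(parts[0].alias.as_deref(), Some("local"));
    println!("ok");
}
```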
@@ -533,6 +1101,25 @@ async fn main() -> anyhow::Result<()> {

    .iter()
    .map(|b| format!("{b:02x}"))
    .collect();

// Set up federation media channel if this is a global room
let (federation_tx, federation_room_hash) = if let Some(ref fm) = federation_mgr {
    let is_global = fm.is_global_room(&room_name);
    if is_global {
        let canonical_hash = fm.global_room_hash(&room_name);
        let (tx, rx) = tokio::sync::mpsc::channel(256);
        let fm_clone = fm.clone();
        tokio::spawn(async move {
            wzp_relay::federation::run_federation_media_egress(fm_clone, rx).await;
        });
        info!(room = %room_name, canonical = ?fm.resolve_global_room(&room_name), "federation egress created (global room)");
        (Some(tx), Some(canonical_hash))
    } else {
        (None, None)
    }
} else {
    (None, None)
};

room::run_participant(
    room_mgr.clone(),
    room_name,
@@ -541,6 +1128,9 @@ async fn main() -> anyhow::Result<()> {

    metrics.clone(),
    &session_id_str,
    trunking_enabled,
    debug_tap,
    federation_tx,
    federation_room_hash,
).await;

// Participant disconnected — clean up presence + per-session metrics
@@ -16,6 +16,13 @@ pub struct RelayMetrics {

pub bytes_forwarded: IntCounter,
pub auth_attempts: IntCounterVec,
pub handshake_duration: Histogram,
// Federation metrics
pub federation_peer_status: IntGaugeVec,
pub federation_peer_rtt_ms: GaugeVec,
pub federation_packets_forwarded: IntCounterVec,
pub federation_packets_deduped: IntCounter,
pub federation_packets_rate_limited: IntCounter,
pub federation_active_rooms: IntGauge,
// Per-session metrics
pub session_buffer_depth: IntGaugeVec,
pub session_loss_pct: GaugeVec,
@@ -60,6 +67,28 @@ impl RelayMetrics {

    )
    .expect("metric");

let federation_peer_status = IntGaugeVec::new(
    Opts::new("wzp_federation_peer_status", "Peer connection status (0=disconnected, 1=connected)"),
    &["peer"],
).expect("metric");
let federation_peer_rtt_ms = GaugeVec::new(
    Opts::new("wzp_federation_peer_rtt_ms", "QUIC RTT to federated peer in milliseconds"),
    &["peer"],
).expect("metric");
let federation_packets_forwarded = IntCounterVec::new(
    Opts::new("wzp_federation_packets_forwarded_total", "Packets forwarded to/from federated peers"),
    &["peer", "direction"],
).expect("metric");
let federation_packets_deduped = IntCounter::with_opts(
    Opts::new("wzp_federation_packets_deduped_total", "Duplicate federation packets dropped"),
).expect("metric");
let federation_packets_rate_limited = IntCounter::with_opts(
    Opts::new("wzp_federation_packets_rate_limited_total", "Federation packets dropped by rate limiter"),
).expect("metric");
let federation_active_rooms = IntGauge::with_opts(
    Opts::new("wzp_federation_active_rooms", "Number of federated rooms currently active"),
).expect("metric");

let session_buffer_depth = IntGaugeVec::new(
    Opts::new(
        "wzp_relay_session_jitter_buffer_depth",
@@ -107,6 +136,12 @@ impl RelayMetrics {

registry.register(Box::new(bytes_forwarded.clone())).expect("register");
registry.register(Box::new(auth_attempts.clone())).expect("register");
registry.register(Box::new(handshake_duration.clone())).expect("register");
registry.register(Box::new(federation_peer_status.clone())).expect("register");
registry.register(Box::new(federation_peer_rtt_ms.clone())).expect("register");
registry.register(Box::new(federation_packets_forwarded.clone())).expect("register");
registry.register(Box::new(federation_packets_deduped.clone())).expect("register");
registry.register(Box::new(federation_packets_rate_limited.clone())).expect("register");
registry.register(Box::new(federation_active_rooms.clone())).expect("register");
registry.register(Box::new(session_buffer_depth.clone())).expect("register");
registry.register(Box::new(session_loss_pct.clone())).expect("register");
registry.register(Box::new(session_rtt_ms.clone())).expect("register");
@@ -120,6 +155,12 @@ impl RelayMetrics {

bytes_forwarded,
auth_attempts,
handshake_duration,
federation_peer_status,
federation_peer_rtt_ms,
federation_packets_forwarded,
federation_packets_deduped,
federation_packets_rate_limited,
federation_active_rooms,
session_buffer_depth,
session_loss_pct,
session_rtt_ms,
@@ -18,6 +18,38 @@ use wzp_proto::MediaTransport;

use crate::metrics::RelayMetrics;
use crate::trunk::TrunkBatcher;

/// Debug tap: logs packet metadata for matching rooms.
#[derive(Clone)]
pub struct DebugTap {
    /// Room name filter ("*" = all rooms, or specific room name/hash).
    pub room_filter: String,
}

impl DebugTap {
    pub fn matches(&self, room_name: &str) -> bool {
        self.room_filter == "*" || self.room_filter == room_name
    }

    pub fn log_packet(&self, room: &str, dir: &str, addr: &std::net::SocketAddr, pkt: &wzp_proto::MediaPacket, fan_out: usize) {
        let h = &pkt.header;
        info!(
            target: "debug_tap",
            room = %room,
            dir = dir,
            addr = %addr,
            seq = h.seq,
            codec = ?h.codec_id,
            ts = h.timestamp,
            fec_block = h.fec_block,
            fec_sym = h.fec_symbol,
            repair = h.is_repair,
            len = pkt.payload.len(),
            fan_out,
            "TAP"
        );
    }
}

/// Unique participant ID within a room.
pub type ParticipantId = u64;
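`DebugTap::matches` is a two-case filter: a literal `"*"` matches every room, anything else is an exact name comparison. Extracted as a free function so it can run stand-alone:

```rust
/// Stand-alone copy of DebugTap's filter rule: "*" matches every room,
/// any other filter must equal the room name exactly (no globbing).
fn tap_matches(room_filter: &str, room_name: &str) -> bool {
    room_filter == "*" || room_filter == room_name
}

fn main() {
    assert!(tap_matches("*", "lobby"));
    assert!(tap_matches("lobby", "lobby"));
    assert!(!tap_matches("lobby", "call-42"));
    // "*" is only special as the whole filter, not as a glob character.
    assert!(!tap_matches("call-*", "call-42"));
    println!("ok");
}
```

Note the filter is not a glob pattern: `"call-*"` matches only a room literally named `call-*`.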
@@ -27,6 +59,22 @@ fn next_id() -> ParticipantId {

    NEXT_PARTICIPANT_ID.fetch_add(1, Ordering::Relaxed)
}

/// Events emitted by RoomManager for federation to observe.
#[derive(Clone, Debug)]
pub enum RoomEvent {
    /// First local participant joined this room.
    LocalJoin { room: String },
    /// Last local participant left this room.
    LocalLeave { room: String },
}

/// Outbound federation media from a local participant.
pub struct FederationMediaOut {
    pub room_name: String,
    pub room_hash: [u8; 8],
    pub data: Bytes,
}

/// How to send data to a participant — either via QUIC transport or WebSocket channel.
#[derive(Clone)]
pub enum ParticipantSender {
@@ -132,6 +180,7 @@ impl Room {

    .map(|p| wzp_proto::packet::RoomParticipant {
        fingerprint: p.fingerprint.clone().unwrap_or_default(),
        alias: p.alias.clone(),
        relay_label: None, // local participant
    })
    .collect()
}
@@ -157,24 +206,35 @@ pub struct RoomManager {

/// When `None`, rooms are open (no auth mode). When `Some`, only listed
/// fingerprints can join the corresponding room.
acl: Option<HashMap<String, HashSet<String>>>,
/// Channel for room lifecycle events (federation subscribes).
event_tx: tokio::sync::broadcast::Sender<RoomEvent>,
}

impl RoomManager {
    pub fn new() -> Self {
        let (event_tx, _) = tokio::sync::broadcast::channel(64);
        Self {
            rooms: HashMap::new(),
            acl: None,
            event_tx,
        }
    }

    /// Create a room manager with ACL enforcement enabled.
    pub fn with_acl() -> Self {
        let (event_tx, _) = tokio::sync::broadcast::channel(64);
        Self {
            rooms: HashMap::new(),
            acl: Some(HashMap::new()),
            event_tx,
        }
    }

    /// Subscribe to room lifecycle events (for federation).
    pub fn subscribe_events(&self) -> tokio::sync::broadcast::Receiver<RoomEvent> {
        self.event_tx.subscribe()
    }

    /// Grant a fingerprint access to a room.
    pub fn allow(&mut self, room_name: &str, fingerprint: &str) {
        if let Some(ref mut acl) = self.acl {
@@ -213,8 +273,13 @@ impl RoomManager {

warn!(room = room_name, fingerprint = ?fingerprint, "unauthorized room join attempt");
return Err("not authorized for this room".to_string());
}
let was_empty = !self.rooms.contains_key(room_name)
    || self.rooms.get(room_name).map_or(true, |r| r.is_empty());
let room = self.rooms.entry(room_name.to_string()).or_insert_with(Room::new);
let id = room.add(addr, sender, fingerprint.map(|s| s.to_string()), alias.map(|s| s.to_string()));
if was_empty {
    let _ = self.event_tx.send(RoomEvent::LocalJoin { room: room_name.to_string() });
}
let update = wzp_proto::SignalMessage::RoomUpdate {
    count: room.len() as u32,
    participants: room.participant_list(),
@@ -235,12 +300,34 @@ impl RoomManager {

    Ok(id)
}

/// Get list of active room names.
pub fn active_rooms(&self) -> Vec<String> {
    self.rooms.keys().cloned().collect()
}

/// Get participant list for a room (fingerprint + alias).
pub fn local_participant_list(&self, room_name: &str) -> Vec<wzp_proto::packet::RoomParticipant> {
    self.rooms.get(room_name)
        .map(|room| room.participant_list())
        .unwrap_or_default()
}

/// Get all senders for participants in a room (for federation inbound media delivery).
pub fn local_senders(&self, room_name: &str) -> Vec<ParticipantSender> {
    self.rooms.get(room_name)
        .map(|room| room.participants.iter()
            .map(|p| p.sender.clone())
            .collect())
        .unwrap_or_default()
}

/// Leave a room. Returns (room_update_msg, remaining_senders) for broadcasting, or None if room is now empty.
pub fn leave(&mut self, room_name: &str, participant_id: ParticipantId) -> Option<(wzp_proto::SignalMessage, Vec<ParticipantSender>)> {
    if let Some(room) = self.rooms.get_mut(room_name) {
        room.remove(participant_id);
        if room.is_empty() {
            self.rooms.remove(room_name);
            let _ = self.event_tx.send(RoomEvent::LocalLeave { room: room_name.to_string() });
            info!(room = room_name, "room closed (empty)");
            return None;
        }
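The `was_empty` check in `join` and the `is_empty` check in `leave` make `RoomEvent` edge-triggered: federation hears only about empty→occupied and occupied→empty transitions, not about every join and leave. A sketch of that edge detection with rooms modeled as a name→count map (the `join`/`leave` free functions and tuple-variant `RoomEvent` are illustrative, not the crate's signatures):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum RoomEvent {
    LocalJoin(String),
    LocalLeave(String),
}

/// Returns an event only when this join created the room (count 0 -> 1).
fn join(rooms: &mut HashMap<String, usize>, room: &str) -> Option<RoomEvent> {
    let n = rooms.entry(room.to_string()).or_insert(0);
    *n += 1;
    (*n == 1).then(|| RoomEvent::LocalJoin(room.to_string()))
}

/// Returns an event only when this leave emptied the room (count 1 -> 0).
fn leave(rooms: &mut HashMap<String, usize>, room: &str) -> Option<RoomEvent> {
    let n = rooms.get_mut(room)?;
    *n -= 1;
    if *n == 0 {
        rooms.remove(room);
        return Some(RoomEvent::LocalLeave(room.to_string()));
    }
    None
}

fn main() {
    let mut rooms = HashMap::new();
    assert_eq!(join(&mut rooms, "lobby"), Some(RoomEvent::LocalJoin("lobby".into())));
    assert_eq!(join(&mut rooms, "lobby"), None); // second join: no event
    assert_eq!(leave(&mut rooms, "lobby"), None); // still occupied
    assert_eq!(leave(&mut rooms, "lobby"), Some(RoomEvent::LocalLeave("lobby".into())));
    println!("ok");
}
```

Edge-triggering keeps the broadcast channel quiet under churn: a busy room produces two events over its whole lifetime, however many participants pass through.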
@@ -350,6 +437,9 @@ pub async fn run_participant(

    metrics: Arc<RelayMetrics>,
    session_id: &str,
    trunking_enabled: bool,
    debug_tap: Option<DebugTap>,
    federation_tx: Option<tokio::sync::mpsc::Sender<FederationMediaOut>>,
    federation_room_hash: Option<[u8; 8]>,
) {
    if trunking_enabled {
        run_participant_trunked(

@@ -358,7 +448,7 @@ pub async fn run_participant(

        .await;
    } else {
        run_participant_plain(
            room_mgr, room_name, participant_id, transport, metrics, session_id, debug_tap, federation_tx, federation_room_hash,
        )
        .await;
    }
@@ -372,6 +462,9 @@ async fn run_participant_plain(

    transport: Arc<wzp_transport::QuinnTransport>,
    metrics: Arc<RelayMetrics>,
    session_id: &str,
    debug_tap: Option<DebugTap>,
    federation_tx: Option<tokio::sync::mpsc::Sender<FederationMediaOut>>,
    federation_room_hash: Option<[u8; 8]>,
) {
    let addr = transport.connection().remote_address();
    let mut packets_forwarded = 0u64;
@@ -445,6 +538,13 @@ async fn run_participant_plain(

    );
}

// Debug tap: log packet metadata
if let Some(ref tap) = debug_tap {
    if tap.matches(&room_name) {
        tap.log_packet(&room_name, "in", &addr, &pkt, others.len());
    }
}

// Forward to all others
let fwd_start = std::time::Instant::now();
let pkt_bytes = pkt.payload.len() as u64;
@@ -469,6 +569,17 @@ async fn run_participant_plain(

        }
    }
}

// Federation: forward to active peer relays via channel
if let Some(ref fed_tx) = federation_tx {
    let data = pkt.to_bytes();
    let _ = fed_tx.try_send(FederationMediaOut {
        room_name: room_name.clone(),
        room_hash: federation_room_hash.unwrap_or_else(|| crate::federation::room_hash(&room_name)),
        data,
    });
}

let fwd_ms = fwd_start.elapsed().as_millis() as u64;
if fwd_ms > max_forward_ms {
    max_forward_ms = fwd_ms;
105  crates/wzp-relay/src/signal_hub.rs  Normal file
@@ -0,0 +1,105 @@

//! Persistent signaling connection manager.
//!
//! Tracks clients connected via `_signal` SNI. Routes call signals
//! (DirectCallOffer, DirectCallAnswer, Hangup) between registered users.

use std::collections::HashMap;
use std::sync::Arc;
use std::time::Instant;

use tracing::{info, warn};
use wzp_proto::{MediaTransport, SignalMessage};
use wzp_transport::QuinnTransport;

/// A client connected via `_signal` for direct calling.
pub struct SignalClient {
    pub fingerprint: String,
    pub alias: Option<String>,
    pub transport: Arc<QuinnTransport>,
    pub connected_at: Instant,
}

/// Manages persistent signaling connections.
pub struct SignalHub {
    clients: HashMap<String, SignalClient>,
}

impl SignalHub {
    pub fn new() -> Self {
        Self {
            clients: HashMap::new(),
        }
    }

    /// Register a new signaling client.
    pub fn register(&mut self, fp: String, transport: Arc<QuinnTransport>, alias: Option<String>) {
        info!(fingerprint = %fp, alias = ?alias, "signal client registered");
        self.clients.insert(fp.clone(), SignalClient {
            fingerprint: fp,
            alias,
            transport,
            connected_at: Instant::now(),
        });
    }

    /// Unregister a signaling client. Returns the client if found.
    pub fn unregister(&mut self, fp: &str) -> Option<SignalClient> {
        let client = self.clients.remove(fp);
        if client.is_some() {
            info!(fingerprint = %fp, "signal client unregistered");
        }
        client
    }

    /// Look up a client by fingerprint.
    pub fn get(&self, fp: &str) -> Option<&SignalClient> {
        self.clients.get(fp)
    }

    /// Check if a fingerprint is online.
    pub fn is_online(&self, fp: &str) -> bool {
        self.clients.contains_key(fp)
    }

    /// Send a signal message to a client by fingerprint.
    pub async fn send_to(&self, fp: &str, msg: &SignalMessage) -> Result<(), String> {
        match self.clients.get(fp) {
            Some(client) => {
                client.transport.send_signal(msg).await
                    .map_err(|e| format!("send to {fp}: {e}"))
            }
            None => Err(format!("{fp} not online")),
        }
    }

    /// Number of connected signaling clients.
    pub fn online_count(&self) -> usize {
        self.clients.len()
    }

    /// List all online fingerprints.
    pub fn online_fingerprints(&self) -> Vec<&str> {
        self.clients.keys().map(|s| s.as_str()).collect()
    }

    /// Get alias for a fingerprint.
    pub fn alias(&self, fp: &str) -> Option<&str> {
        self.clients.get(fp).and_then(|c| c.alias.as_deref())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn register_unregister() {
        let mut hub = SignalHub::new();
        assert_eq!(hub.online_count(), 0);
        assert!(!hub.is_online("alice"));

        // Can't easily construct QuinnTransport in a unit test,
        // so we just test the HashMap logic conceptually.
        // Integration tests cover the full flow.
    }
}
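The unit test above notes that `QuinnTransport` can't be built in isolation, so the hub's map bookkeeping goes untested there. A transport-free sketch of the same register/unregister/is_online logic, with the transport replaced by a plain `String` payload (`MiniHub` is a hypothetical stand-in, not part of the crate):

```rust
use std::collections::HashMap;

/// SignalHub's HashMap bookkeeping with the transport swapped for a String,
/// so the register/unregister/is_online behavior is testable stand-alone.
struct MiniHub {
    clients: HashMap<String, String>, // fingerprint -> alias stand-in
}

impl MiniHub {
    fn new() -> Self {
        Self { clients: HashMap::new() }
    }
    fn register(&mut self, fp: &str, alias: &str) {
        // Re-registering the same fingerprint replaces the old entry,
        // mirroring HashMap::insert in SignalHub::register.
        self.clients.insert(fp.to_string(), alias.to_string());
    }
    fn unregister(&mut self, fp: &str) -> Option<String> {
        self.clients.remove(fp)
    }
    fn is_online(&self, fp: &str) -> bool {
        self.clients.contains_key(fp)
    }
    fn online_count(&self) -> usize {
        self.clients.len()
    }
}

fn main() {
    let mut hub = MiniHub::new();
    assert!(!hub.is_online("alice"));
    hub.register("alice", "Alice");
    hub.register("bob", "Bob");
    assert_eq!(hub.online_count(), 2);
    assert_eq!(hub.unregister("alice").as_deref(), Some("Alice"));
    assert!(!hub.is_online("alice"));
    assert_eq!(hub.unregister("alice"), None); // already gone
    println!("ok");
}
```

Injecting the transport behind a trait (rather than `Arc<QuinnTransport>` concretely) would let the real `SignalHub` be unit-tested the same way.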
@@ -16,6 +16,9 @@ async-trait = { workspace = true }

serde_json = "1"
rustls = { version = "0.23", default-features = false, features = ["ring", "std"] }
rcgen = "0.13"
ed25519-dalek = { workspace = true }
hkdf = { workspace = true }
sha2 = { workspace = true }

[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
@@ -6,20 +6,74 @@ use std::time::Duration;
 use quinn::crypto::rustls::QuicClientConfig;
 use quinn::crypto::rustls::QuicServerConfig;
 
-/// Create a server configuration with a self-signed certificate (for testing).
+/// Create a server configuration with a self-signed certificate (random keypair).
 ///
-/// Tunes QUIC transport parameters for lossy VoIP:
-/// - 30s idle timeout
-/// - 5s keep-alive interval
-/// - DATAGRAM extension enabled
-/// - Conservative flow control for bandwidth-constrained links
+/// The certificate changes on every call. Use `server_config_from_seed` for
+/// a deterministic certificate that survives relay restarts.
 pub fn server_config() -> (quinn::ServerConfig, Vec<u8>) {
     let cert_key = rcgen::generate_simple_self_signed(vec!["localhost".to_string()])
         .expect("failed to generate self-signed cert");
     let cert_der = rustls::pki_types::CertificateDer::from(cert_key.cert);
     let key_der =
         rustls::pki_types::PrivateKeyDer::try_from(cert_key.key_pair.serialize_der()).unwrap();
+    build_server_config(cert_der, key_der)
+}
+
+/// Create a server configuration with a deterministic self-signed certificate
+/// derived from a 32-byte seed. Same seed = same cert = same TLS fingerprint.
+pub fn server_config_from_seed(seed: &[u8; 32]) -> (quinn::ServerConfig, Vec<u8>) {
+    use ed25519_dalek::pkcs8::EncodePrivateKey;
+    use ed25519_dalek::SigningKey;
+    use hkdf::Hkdf;
+    use sha2::Sha256;
+
+    // Derive Ed25519 key bytes from seed via HKDF
+    let hk = Hkdf::<Sha256>::new(None, seed);
+    let mut ed_bytes = [0u8; 32];
+    hk.expand(b"wzp-tls-ed25519", &mut ed_bytes)
+        .expect("HKDF expand failed");
+
+    // Create Ed25519 signing key and export as PKCS8 DER
+    let signing_key = SigningKey::from_bytes(&ed_bytes);
+    let pkcs8_doc = signing_key
+        .to_pkcs8_der()
+        .expect("failed to encode Ed25519 key as PKCS8");
+    let key_der_for_rcgen =
+        rustls::pki_types::PrivateKeyDer::try_from(pkcs8_doc.as_bytes().to_vec())
+            .expect("failed to wrap PKCS8 DER");
+
+    // Create rcgen KeyPair from DER
+    let key_pair = rcgen::KeyPair::from_der_and_sign_algo(&key_der_for_rcgen, &rcgen::PKCS_ED25519)
+        .expect("failed to create KeyPair from seed-derived Ed25519 key");
+
+    // Build self-signed cert with this deterministic keypair
+    let params = rcgen::CertificateParams::new(vec!["localhost".to_string()])
+        .expect("failed to create CertificateParams");
+    let cert = params.self_signed(&key_pair).expect("failed to self-sign cert");
+    let cert_der = rustls::pki_types::CertificateDer::from(cert.der().to_vec());
+    let key_der = rustls::pki_types::PrivateKeyDer::try_from(key_pair.serialize_der())
+        .expect("failed to serialize key DER");
+
+    build_server_config(cert_der, key_der)
+}
+
+/// Compute a hex-formatted SHA-256 fingerprint of a DER-encoded certificate.
+///
+/// Format: `xx:xx:xx:xx:...` (32 bytes = 64 hex chars with colons).
+pub fn tls_fingerprint(cert_der: &[u8]) -> String {
+    use sha2::{Digest, Sha256};
+    let hash = Sha256::digest(cert_der);
+    hash.iter()
+        .map(|b| format!("{b:02x}"))
+        .collect::<Vec<_>>()
+        .join(":")
+}
+
+fn build_server_config(
+    cert_der: rustls::pki_types::CertificateDer<'static>,
+    key_der: rustls::pki_types::PrivateKeyDer<'static>,
+) -> (quinn::ServerConfig, Vec<u8>) {
     let mut server_crypto = rustls::ServerConfig::builder()
         .with_no_client_auth()
         .with_single_cert(vec![cert_der.clone()], key_der)
@@ -22,7 +22,7 @@ pub mod path_monitor;
 pub mod quic;
 pub mod reliable;
 
-pub use config::{client_config, server_config};
+pub use config::{client_config, server_config, server_config_from_seed, tls_fingerprint};
 pub use connection::{accept, connect, create_endpoint};
 pub use path_monitor::PathMonitor;
 pub use quic::QuinnTransport;
@@ -33,6 +33,13 @@ impl QuinnTransport {
         &self.connection
     }
 
+    /// Send raw bytes as a QUIC datagram (no MediaPacket framing).
+    pub fn send_raw_datagram(&self, data: &[u8]) -> Result<(), TransportError> {
+        self.connection
+            .send_datagram(bytes::Bytes::copy_from_slice(data))
+            .map_err(|e| TransportError::Internal(format!("datagram: {e}")))
+    }
+
     /// Close the QUIC connection immediately (synchronous, no async needed).
     /// The relay will detect the close and remove this participant from the room.
     pub fn close_now(&self) {
@@ -136,7 +143,7 @@ impl MediaTransport for QuinnTransport {
             }
         };
 
-        match datagram::deserialize_media(data) {
+        match datagram::deserialize_media(data.clone()) {
             Some(packet) => {
                 // Record receive observation
                 {
@@ -149,8 +156,10 @@ impl MediaTransport for QuinnTransport {
                 Ok(Some(packet))
             }
             None => {
-                tracing::warn!("received malformed media datagram");
-                Ok(None)
+                tracing::warn!(len = data.len(), "skipping malformed media datagram, continuing");
+                // Don't return Ok(None) — that signals connection closed.
+                // Recurse to read the next datagram instead.
+                Box::pin(self.recv_media()).await
             }
         }
     }
 }
115 debug/INCIDENT-2026-04-06-art-gc-sigbus.md (new file)
@@ -0,0 +1,115 @@
# Incident Report: SIGBUS in ART GC During Audio Thread JNI Calls

**Date:** 2026-04-06
**Severity:** High — app crash (SIGBUS) mid-call
**Status:** Root-caused, fix proposed
**Affects:** Android 16 (API 36) devices with concurrent mark-compact GC

## Summary

The app crashes with SIGBUS (signal 7, BUS_ADRERR) during an active call. The crash occurs in ART's garbage collector or JIT compiler, NOT in our Rust native code or AudioRing buffer. Both `wzp-capture` and `wzp-playout` Kotlin threads are affected.

## Crash Details

### Crash 1: wzp-capture (18:42, after 476s of call)

```
Fatal signal 7 (SIGBUS), code 2 (BUS_ADRERR), fault addr 0x720009be38
tid 19697 (wzp-capture), pid 17885 (com.wzp.phone)
```

**Backtrace:**

```
#00 art::StackVisitor::WalkStack
#01 art::Thread::VisitRoots
#02 art::gc::collector::MarkCompact::ThreadFlipVisitor::Run
#03 art::Thread::EnsureFlipFunctionStarted
#04 CheckJNI::ReleasePrimitiveArrayElements   ← JNI boundary
#05 android_media_AudioRecord_readInArray     ← AudioRecord.read()
#09 com.wzp.audio.AudioPipeline.runCapture
```

**Root cause:** ART's concurrent mark-compact GC (`MarkCompact::ThreadFlipVisitor`) is flipping thread roots while the capture thread is in the middle of a JNI call (`AudioRecord.read()`). The GC's `EnsureFlipFunctionStarted` triggers a stack walk that hits an invalid address.

### Crash 2: wzp-playout (19:17, mid-call)

```
Fatal signal 7 (SIGBUS), code 2 (BUS_ADRERR), fault addr 0x225eb98
tid 32574 (wzp-playout), pid 32479 (com.wzp.phone)
```

**Backtrace:**

```
#00 com.wzp.audio.AudioPipeline.runPlayout   ← JIT-compiled code
#01 art_quick_osr_stub                       ← On-Stack Replacement
#02 art::jit::Jit::MaybeDoOnStackReplacement
#03-#04 art::interpreter::ExecuteSwitchImplCpp
```

**Root cause:** ART's JIT compiler performed On-Stack Replacement (OSR) on the hot playout loop. The OSR stub references a code address (`0x225eb98`) that is no longer valid — likely because the GC moved the compiled code in memory during concurrent compaction.

## Why This Happens

Android 16 introduced a new **concurrent mark-compact GC** (CMC) that moves objects in memory while other threads are running. This is safe for normal Java code because ART uses read barriers. But our audio threads have specific properties that stress this:

1. **`Thread.MAX_PRIORITY`** — audio threads run at the highest priority, starving the GC thread of CPU time. The GC may not complete its thread flip before the audio thread resumes.

2. **Tight JNI loops** — `runCapture()` and `runPlayout()` loop every 20ms calling `AudioRecord.read()` / `AudioTrack.write()` via JNI. Each JNI transition is a GC safepoint, but the thread spends most of its time in native code where the GC can't flip it.

3. **Long-running JIT-compiled code** — the hot loop gets JIT-compiled and may undergo OSR. If the GC compacts memory while OSR is in progress, the stub can reference stale addresses.

4. **Daemon threads that never exit** — our threads are parked with `Thread.sleep(Long.MAX_VALUE)` after the call ends (to avoid the libcrypto TLS destructor crash). These zombie threads accumulate GC root scan work.

## Evidence This Is Not Our Bug

| Component | Evidence |
|-----------|----------|
| **AudioRing** | Not in any backtrace. All crash frames are in `libart.so` (ART runtime) |
| **Rust native code** | `libwzp_android.so` not in any crash frame |
| **JNI bridge** | Crash happens during `ReleasePrimitiveArrayElements` (ART internal), not during our JNI calls |
| **Timing** | Crashes after 476s and mid-call — not during init or teardown |

## Proposed Fix

### Option A: Disable concurrent GC compaction for audio threads

Use `dalvik.vm.gctype` or per-thread GC pinning to prevent the mark-compact collector from moving objects referenced by audio threads.

**Not directly controllable from app code.** The remaining options instead reduce GC pressure:

### Option B: Reduce JNI transitions in audio threads

Instead of calling `engine.writeAudio(pcm)` / `engine.readAudio(pcm)` via JNI on every 20ms frame, batch multiple frames or use a `DirectByteBuffer` to share memory without JNI array copies.

**Implementation:**

- Allocate a `DirectByteBuffer` in Kotlin, share the pointer with Rust via JNI
- Audio threads write/read directly to the buffer (no JNI call per frame)
- Rust reads/writes from the same memory region
- Reduces JNI transitions from 100/sec to 0/sec per audio direction
|
||||||
|
|
||||||
|
Skip the Kotlin AudioRecord/AudioTrack entirely. Use Oboe (which we already have as a dependency in `wzp-android/Cargo.toml`) to create native audio streams directly from Rust. The audio callbacks run in native code with no JNI, no GC interaction, no ART.
|
||||||
|
|
||||||
|
This is how the project was originally designed (see `audio_android.rs` with Oboe references) before switching to Kotlin AudioRecord for simplicity.
|
||||||
|
|
||||||
|
**Pros:** Eliminates the entire JNI audio path. No GC interaction. Lower latency.
|
||||||
|
**Cons:** Requires rewriting `AudioPipeline.kt` into Rust. Oboe setup is more complex.
|
||||||
|
|
||||||
|
### Option D: Pin audio thread objects to prevent GC movement
|
||||||
|
|
||||||
|
Use JNI `GetPrimitiveArrayCritical` instead of `GetShortArrayRegion` to pin the array in memory during the operation. This prevents the GC from moving the array while we're using it.
|
||||||
|
|
||||||
|
**Implementation:** Change `nativeWriteAudio` / `nativeReadAudio` JNI functions to use critical sections.
|
||||||
|
|
||||||
|
### Recommendation
|
||||||
|
|
||||||
|
**Short term: Option B** (DirectByteBuffer) — reduces JNI transitions without major refactoring.
|
||||||
|
|
||||||
|
**Long term: Option C** (Oboe from Rust) — eliminates the problem entirely. This is the architecturally correct solution and matches the original design intent.
|
||||||
|
|
||||||
|
## Data Files
|
||||||
|
|
||||||
|
- Logcat from Nothing A059 (Android 16, API 36)
|
||||||
|
- Two crashes in the same session: 18:42 (capture, after 476s) and 19:17 (playout)
|
||||||
|
- Both SIGBUS/BUS_ADRERR, both in ART internal frames
|
||||||
175 debug/INCIDENT-2026-04-06-capture-thread-use-after-free.md (new file)
@@ -0,0 +1,175 @@
# Incident Report: Native Crash in Capture Thread — Use-After-Free on Engine Handle

**Date:** 2026-04-06
**Severity:** Critical — app crash (SIGSEGV) on call hangup
**Status:** Root-caused, fix pending
**Affects:** Android client only

## Summary

The app crashes with a native SIGSEGV during or shortly after call hangup. The crash occurs in JIT-compiled code inside `AudioPipeline.runCapture()`. The root cause is a use-after-free: the capture thread calls `engine.writeAudio()` via JNI after the engine's native handle has been freed by `teardown()` on the ViewModel thread.

## Crash Stacktrace

```
04-06 13:05:42.707 F DEBUG: #09 pc 000000000250696c /memfd:jit-cache (deleted) (com.wzp.audio.AudioPipeline.runCapture+3228)
04-06 13:05:42.707 F DEBUG: #14 pc 0000000000005270 <anonymous:730900d000> (com.wzp.audio.AudioPipeline.start$lambda$0+0)
04-06 13:05:42.708 F DEBUG: #19 pc 00000000000044cc <anonymous:730900d000> (com.wzp.audio.AudioPipeline.$r8$lambda$0rYcivupwvyN4SgBXhsroKmTlo8+0)
04-06 13:05:42.708 F DEBUG: #24 pc 00000000000042e4 <anonymous:730900d000> (com.wzp.audio.AudioPipeline$$ExternalSyntheticLambda0.run+0)
```

This is a tombstone (signal crash), not a Java exception. The `F DEBUG` tag indicates a native crash handler (debuggerd) captured the signal.

## Root Cause

### The Race Condition

Two threads operate on the engine concurrently without synchronization:

**Thread 1: `wzp-capture` (AudioRecord thread, MAX_PRIORITY)**

```kotlin
// AudioPipeline.runCapture() — runs in a tight loop
while (running) {
    val read = recorder.read(pcm, 0, FRAME_SAMPLES)
    if (read > 0) {
        engine.writeAudio(pcm) // <-- JNI call to native engine
    }
}
```

**Thread 2: ViewModel/UI thread (normal priority)**

```kotlin
// CallViewModel.teardown()
stopAudio()        // sets AudioPipeline.running = false
engine?.stopCall() // tells Rust to stop
engine?.destroy()  // frees native memory, sets nativeHandle = 0L
engine = null
```
### The Kotlin Guard is Insufficient

`WzpEngine.writeAudio()` has a guard:

```kotlin
fun writeAudio(pcm: ShortArray): Int {
    if (nativeHandle == 0L) return 0           // check
    return nativeWriteAudio(nativeHandle, pcm) // use
}
```

This is a **TOCTOU (time-of-check/time-of-use) race**:

1. Capture thread checks `nativeHandle != 0L` → true
2. ViewModel thread calls `destroy()`, which calls `nativeDestroy(handle)` then sets `nativeHandle = 0L`
3. Capture thread calls `nativeWriteAudio(handle, pcm)` with the now-freed handle
4. The JNI function dereferences `handle` as a pointer → **SIGSEGV**

The same race exists for `readAudio()` on the `wzp-playout` thread.

### Why `stopAudio()` Doesn't Prevent This

`AudioPipeline.stop()` sets `running = false` but does **NOT join or wait** for the threads:

```kotlin
fun stop() {
    running = false
    // Don't join — threads are parked as daemons to avoid native TLS crash
    captureThread = null
    playoutThread = null
}
```

The threads are intentionally not joined because of a separate bug: exiting a JNI-calling thread triggers a `SIGSEGV in OPENSSL_free` due to libcrypto TLS destructors on Android. The threads instead "park" with `Thread.sleep(Long.MAX_VALUE)` after the loop exits.

But the problem is the **window between `running = false` and the thread actually checking it**. The capture thread may be blocked in `recorder.read()` (which blocks for 20ms per frame) or in the middle of `engine.writeAudio()` when `destroy()` is called.

### Timeline of the Crash

```
T=0ms     ViewModel: stopAudio() → sets running=false
T=0ms     ViewModel: stopStatsPolling()
T=0ms     ViewModel: engine.stopCall() — Rust stops internal tasks
T=1ms     ViewModel: engine.destroy() — frees native memory
          ↑ nativeHandle = 0L

T=0-20ms  Capture thread: still in recorder.read() or writeAudio()
          → if in writeAudio(), the nativeHandle check passed BEFORE destroy()
          → JNI dereferences freed pointer → SIGSEGV
```
## Affected Code

### Files with the race

| File | Line(s) | Issue |
|------|---------|-------|
| `android/.../WzpEngine.kt` | 107-108, 116-117 | TOCTOU on `nativeHandle` in `writeAudio()` / `readAudio()` |
| `android/.../CallViewModel.kt` | 257-262 | `stopAudio()` + `destroy()` without waiting for audio threads to quiesce |
| `android/.../AudioPipeline.kt` | 80-82 | `stop()` doesn't synchronize with running threads |

### Files with the thread-parking workaround

| File | Line(s) | Context |
|------|---------|---------|
| `android/.../AudioPipeline.kt` | 57-58, 69-70 | Threads parked after loop exit to avoid libcrypto TLS crash |
| `android/.../AudioPipeline.kt` | 96-101 | `parkThread()` — `Thread.sleep(Long.MAX_VALUE)` |

## Constraints for the Fix

1. **Cannot join audio threads** — joining triggers a separate SIGSEGV in `OPENSSL_free` when the thread's TLS destructors fire (documented in `AudioPipeline.kt` comments). The parking workaround must be preserved.

2. **Must guarantee no JNI calls after `destroy()`** — the native handle is a raw pointer; any dereference after free is undefined behavior.

3. **Must not add blocking waits on the UI thread** — `teardown()` runs on the ViewModel thread, which must remain responsive.

4. **The `@Volatile running` flag is necessary but not sufficient** — it prevents new loop iterations but doesn't help with in-flight JNI calls.

5. **Both `writeAudio` and `readAudio` have the same race** — the fix must cover both the capture and playout paths.

## Reproduction

The crash is timing-dependent. It is most likely to occur when:

- the capture thread is in the middle of a `writeAudio()` JNI call when `destroy()` is called
- the device is slow or under CPU pressure (GC, thermal throttling)

The race arises on every hangup, but only crashes ~10-30% of the time due to the narrow timing window.

## Analysis of Possible Fix Approaches

### Approach A: Add a synchronization gate in the JNI bridge

Use a `ReentrantReadWriteLock` or `AtomicBoolean` in `WzpEngine.kt`:

- Audio threads acquire a read lock / check the flag before JNI calls
- `destroy()` acquires a write lock / sets the flag and waits for in-flight calls to drain

**Pro:** Clean, solves the race directly.
**Con:** Adds a lock to the audio hot path (every 20ms). `ReentrantReadWriteLock` is not lock-free. However, the read-lock path is uncontended 99.99% of the time (the write lock is taken only during destroy), so contention is negligible.

### Approach B: Defer `destroy()` until audio threads have stopped

Instead of calling `destroy()` in `teardown()`, set a flag and have the audio threads call `destroy()` after they exit the loop (before parking).

**Pro:** No locks on the hot path.
**Con:** Complex lifecycle — which thread calls destroy? What if both threads race to destroy? Needs a `CountDownLatch` or similar anyway.

### Approach C: Make the JNI handle atomically invalidated

Use an `AtomicLong` for `nativeHandle`, with `compareAndExchange` in `destroy()` and a get-and-check pattern in the audio calls.

**Pro:** Lock-free.
**Con:** Still has a TOCTOU window — the thread can load the handle, then it gets CAS'd to 0, then the thread uses the stale handle. Doesn't fully solve the race without combining it with a reference count or epoch.

### Approach D: Introduce a destroy latch

Add a `CountDownLatch(2)` — one count for each audio thread (capture + playout) — that each thread counts down after exiting its loop, before parking. `teardown()` sets `running = false`, then `await`s the latch (with a timeout), then calls `destroy()`.

**Pro:** Guarantees no in-flight JNI calls at destroy time. No locks on the hot path.
**Con:** `teardown()` blocks for up to one frame duration (~20ms) waiting for the threads to exit their loops. Acceptable for a hangup path.

### Recommendation

**Approach D (destroy latch)** is the cleanest. The 20ms worst-case wait is imperceptible on the hangup path, and it provides a hard guarantee that no JNI calls are in flight when `destroy()` runs. Combined with the existing `running` volatile flag, the audio threads exit their loops within one frame and count down the latch.

If the latch times out (e.g., `AudioRecord.read()` is stuck), `destroy()` proceeds anyway — the `panic::catch_unwind` in the JNI bridge will catch the invalid access as a panic rather than a SIGSEGV (though this is best-effort; a true SIGSEGV from freed memory is not catchable).
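The latch handshake described above can be sketched as a small runnable program (in Java rather than Kotlin for brevity; all names are illustrative and the JNI calls and thread parking are elided):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class Main {
    static volatile boolean running = true;
    // One count per audio thread: capture + playout
    static final CountDownLatch quiesced = new CountDownLatch(2);

    static void startAudioThread(String name) {
        Thread t = new Thread(() -> {
            while (running) {
                // a real thread would do a ~20 ms JNI read/write here
                try { Thread.sleep(5); } catch (InterruptedException e) { return; }
            }
            quiesced.countDown(); // promise: no further JNI calls from this thread
            // (real code would park here instead of exiting; TLS workaround elided)
        }, name);
        t.setDaemon(true);
        t.start();
    }

    public static void main(String[] args) throws InterruptedException {
        startAudioThread("wzp-capture");
        startAudioThread("wzp-playout");
        Thread.sleep(30); // call in progress

        // teardown(): stop new loop iterations, then wait for in-flight work to drain
        running = false;
        boolean safe = quiesced.await(200, TimeUnit.MILLISECONDS);
        System.out.println(safe); // only once this is true would destroy() run
    }
}
```

Note that `countDown()` happens strictly after the last possible JNI call in each loop, which is what turns the volatile flag into a hard happens-before guarantee for `destroy()`.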
## Data Files

The crash was captured from the Nothing A059 device at 13:05:42 on 2026-04-06. The tombstone is in the device's `/data/tombstones/` directory. The logcat output shows the crash frames.
747 docs/ADMINISTRATION.md (new file)
@@ -0,0 +1,747 @@
# WarzonePhone Relay Administration Guide

This document covers deploying, configuring, and operating wzp-relay instances, including federation setup, monitoring, and troubleshooting.

## Relay Deployment

### Binary

Build and run the relay directly:

```bash
# Build release binary
cargo build --release --bin wzp-relay

# Run with defaults (listen on 0.0.0.0:4433, room mode, no auth)
./target/release/wzp-relay

# Run with config file
./target/release/wzp-relay --config /etc/wzp/relay.toml
```

### Remote Build (Linux)

The included build script provisions a temporary Hetzner Cloud VPS, builds all binaries, and downloads them:

```bash
# Requires: hcloud CLI authenticated, SSH key "wz" registered
./scripts/build-linux.sh
# Outputs to: target/linux-x86_64/
```

Produces: `wzp-relay`, `wzp-client`, `wzp-client-audio`, `wzp-web`, `wzp-bench`.

### Docker

```dockerfile
FROM rust:1.85 AS builder
WORKDIR /src
COPY . .
RUN cargo build --release --bin wzp-relay

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /src/target/release/wzp-relay /usr/local/bin/
EXPOSE 4433/udp
EXPOSE 9090/tcp
VOLUME /data
ENV HOME=/data
ENTRYPOINT ["wzp-relay"]
CMD ["--config", "/data/relay.toml", "--metrics-port", "9090"]
```

Build and run:

```bash
docker build -t wzp-relay .
docker run -d \
  --name wzp-relay \
  -p 4433:4433/udp \
  -p 9090:9090/tcp \
  -v /opt/wzp:/data \
  wzp-relay
```

### systemd

Create `/etc/systemd/system/wzp-relay.service`:

```ini
[Unit]
Description=WarzonePhone Relay
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=wzp
Group=wzp
ExecStart=/usr/local/bin/wzp-relay --config /etc/wzp/relay.toml
Restart=always
RestartSec=5
LimitNOFILE=65536

# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/wzp
PrivateTmp=yes

Environment=HOME=/var/lib/wzp
Environment=RUST_LOG=info

[Install]
WantedBy=multi-user.target
```

Setup:

```bash
# Create service user
useradd --system --home-dir /var/lib/wzp --create-home wzp

# Install binary and config
cp target/release/wzp-relay /usr/local/bin/
mkdir -p /etc/wzp
cp relay.toml /etc/wzp/

# Enable and start
systemctl daemon-reload
systemctl enable --now wzp-relay
journalctl -u wzp-relay -f
```
## TOML Configuration Reference

All fields have defaults. A minimal config file only needs the fields you want to override.

### Core Settings

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `listen_addr` | string (socket addr) | `"0.0.0.0:4433"` | UDP address to listen on for incoming QUIC connections |
| `remote_relay` | string (socket addr) | none | Remote relay address for forward mode. Disables room mode when set |
| `max_sessions` | integer | `100` | Maximum concurrent client sessions |
| `log_level` | string | `"info"` | Logging level: trace, debug, info, warn, error |

### Jitter Buffer

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `jitter_target_depth` | integer | `50` | Target buffer depth in packets (50 = 1 second at 20ms frames) |
| `jitter_max_depth` | integer | `250` | Maximum buffer depth in packets (250 = 5 seconds) |

### Authentication

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `auth_url` | string | none | featherChat auth validation URL. When set, clients must send a bearer token as their first signal message. The relay validates it via `POST <auth_url>` |

### Metrics and Monitoring

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `metrics_port` | integer | none | Port for the Prometheus HTTP metrics endpoint. Disabled if not set |
| `probe_targets` | array of socket addrs | `[]` | Peer relay addresses to probe for health monitoring (1 ping/s each) |
| `probe_mesh` | boolean | `false` | Enable mesh mode for probe targets |

### Media Processing

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `trunking_enabled` | boolean | `false` | Enable trunk batching for outgoing media. Packs multiple session packets into one QUIC datagram, reducing overhead |

### WebSocket / Browser Support

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `ws_port` | integer | none | Port for the WebSocket listener (browser clients). Disabled if not set |
| `static_dir` | string | none | Directory to serve static files from (HTML/JS/WASM) |

### Federation

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `peers` | array of PeerConfig | `[]` | Outbound federation peer relays |
| `trusted` | array of TrustedConfig | `[]` | Inbound federation trust list |
| `global_rooms` | array of GlobalRoomConfig | `[]` | Room names to bridge across federation |

### Debugging

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `debug_tap` | string | none | Log packet headers for matching rooms. Use `"*"` for all rooms, or a specific room name |

### PeerConfig Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `url` | string | yes | Address of the peer relay (e.g., `"193.180.213.68:4433"`) |
| `fingerprint` | string | yes | Expected TLS certificate fingerprint (hex with colons) |
| `label` | string | no | Human-readable label for logging |

### TrustedConfig Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `fingerprint` | string | yes | Expected TLS certificate fingerprint (hex with colons) |
| `label` | string | no | Human-readable label for logging |

### GlobalRoomConfig Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | yes | Room name to bridge across federation (e.g., `"android"`) |
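Putting the reference tables together, a small example `relay.toml` might look like the following (all addresses, fingerprints, and labels are placeholders, not real deployments):

```toml
# Example only: every value below is a placeholder.
listen_addr = "0.0.0.0:4433"
max_sessions = 200
metrics_port = 9090
trunking_enabled = true

[[global_rooms]]
name = "android"

[[peers]]
url = "203.0.113.10:4433"      # peer relay address (placeholder)
fingerprint = "ab:cd:ef:..."   # peer's TLS certificate fingerprint
label = "relay-eu"

[[trusted]]
fingerprint = "12:34:56:..."   # fingerprint accepted for inbound federation
label = "relay-us"
```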
## CLI Flags Reference

```
wzp-relay [--config <path>] [--listen <addr>] [--remote <addr>]
          [--auth-url <url>] [--metrics-port <port>]
          [--probe <addr>]... [--probe-mesh] [--mesh-status]
          [--trunking] [--global-room <name>]...
          [--debug-tap <room>] [--event-log <path>]
          [--ws-port <port>] [--static-dir <dir>]
```
| Flag | Description |
|------|-------------|
| `--config <path>` | Load configuration from TOML file. CLI flags override config file values |
| `--listen <addr>` | Listen address (default: `0.0.0.0:4433`) |
| `--remote <addr>` | Remote relay for forwarding mode. Disables room mode |
| `--auth-url <url>` | featherChat auth endpoint (e.g., `https://chat.example.com/v1/auth/validate`) |
| `--metrics-port <port>` | Prometheus metrics HTTP port (e.g., `9090`) |
| `--probe <addr>` | Peer relay to probe for health monitoring. Repeatable |
| `--probe-mesh` | Enable mesh mode for probes |
| `--mesh-status` | Print mesh health table and exit (diagnostic) |
| `--trunking` | Enable trunk batching for outgoing media |
| `--global-room <name>` | Declare a room as global (bridged across federation). Repeatable |
| `--debug-tap <room>` | Log packet headers for a room (`"*"` for all rooms) |
| `--event-log <path>` | Write JSONL protocol event log for federation debugging |
| `--version`, `-V` | Print build git hash and exit |
| `--ws-port <port>` | WebSocket listener port for browser clients |
| `--static-dir <dir>` | Directory to serve static files from |
| `--help`, `-h` | Print help and exit |

CLI flags always override config file values when both are specified.
## Federation Setup

### Concepts

- **`[[peers]]`** -- outbound: relays we connect TO. Requires address + fingerprint
- **`[[trusted]]`** -- inbound: relays we accept connections FROM. Requires fingerprint only (they connect to us)
- **`[[global_rooms]]`** -- rooms bridged across all federated peers. Participants on different relays in the same global room hear each other

### Getting Your Relay's Fingerprint

When a relay starts, it logs its TLS fingerprint:

```
INFO TLS certificate (deterministic from relay identity) tls_fingerprint="a5d6:e3c6:5ae7:185c:4eb1:af89:daed:4a43"
INFO federation: to peer with this relay, add to relay.toml:
INFO [[peers]]
INFO url = "193.180.213.68:4433"
INFO fingerprint = "a5d6:e3c6:5ae7:185c:4eb1:af89:daed:4a43"
```

Share this information with the administrator of the peer relay.
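The colon-separated form is a 16-byte certificate digest, hex-encoded in groups of four. A formatting sketch -- note the digest algorithm used here (SHA-256 truncated to 16 bytes) is an assumption for illustration, not confirmed from the relay source:

```python
import hashlib

def format_fingerprint(cert_der: bytes) -> str:
    """Render a certificate digest in the colon-separated form shown in the
    relay logs (8 groups of 4 hex chars = 16 bytes). SHA-256 truncated to
    16 bytes is an illustrative assumption; the relay's actual digest
    algorithm may differ."""
    digest = hashlib.sha256(cert_der).digest()[:16]
    hexstr = digest.hex()
    return ":".join(hexstr[i:i + 4] for i in range(0, len(hexstr), 4))
```

Useful when comparing a fingerprint pasted by a peer admin against a certificate you hold locally.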

### Unknown Peer Connections

When an unknown relay tries to federate, the log shows:

```
WARN unknown relay wants to federate addr=10.0.0.5:12345 fp="7f2a:b391:0c44:..."
INFO to accept, add to relay.toml:
INFO [[trusted]]
INFO fingerprint = "7f2a:b391:0c44:..."
INFO label = "Relay at 10.0.0.5:12345"
```
## Example Configurations

### Single Relay (Minimal)

```toml
# /etc/wzp/relay.toml
# Minimal config -- all defaults, just enable metrics
metrics_port = 9090
```

Run:

```bash
wzp-relay --config /etc/wzp/relay.toml
```
### Single Relay (Full Featured)

```toml
# /etc/wzp/relay.toml
listen_addr = "0.0.0.0:4433"
max_sessions = 200
log_level = "info"

# Metrics
metrics_port = 9090

# Authentication
auth_url = "https://chat.example.com/v1/auth/validate"

# Browser support
ws_port = 8080
static_dir = "/opt/wzp/web"

# Performance
trunking_enabled = true

# Jitter buffer tuning
jitter_target_depth = 50
jitter_max_depth = 250
```
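The two jitter knobs can be read as a fill-level model: `jitter_target_depth` is the depth playout waits for before starting, and `jitter_max_depth` is the hard cap above which packets are dropped. A conceptual sketch of those semantics (not the relay's actual implementation):

```python
from collections import deque

class JitterBuffer:
    """Conceptual model of the jitter buffer knobs -- illustrative only.
    `target_depth` is the fill level playout waits for before starting;
    packets beyond `max_depth` are dropped (overrun); popping from an
    empty buffer counts as an underrun."""
    def __init__(self, target_depth: int = 50, max_depth: int = 250):
        self.target_depth = target_depth
        self.max_depth = max_depth
        self.buf = deque()
        self.overruns = 0
        self.underruns = 0

    def ready_for_playout(self) -> bool:
        # playout should not start until the target depth is reached
        return len(self.buf) >= self.target_depth

    def push(self, packet) -> bool:
        if len(self.buf) >= self.max_depth:
            self.overruns += 1   # arriving faster than playout drains
            return False
        self.buf.append(packet)
        return True

    def pop(self):
        if not self.buf:
            self.underruns += 1  # playout starved: packets late or lost
            return None
        return self.buf.popleft()
```

This is why the troubleshooting advice below pairs overruns with raising `jitter_max_depth` and underruns with raising `jitter_target_depth`.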

### Two-Relay Federation

**Relay A** (`relay-a.toml` on 193.180.213.68):

```toml
listen_addr = "0.0.0.0:4433"
metrics_port = 9090

# Outbound: connect to Relay B
[[peers]]
url = "10.0.0.5:4433"
fingerprint = "7f2a:b391:0c44:9e1d:a8b2:c5d7:e3f0:1234"
label = "Relay B (US)"

# Accept inbound from Relay B
[[trusted]]
fingerprint = "7f2a:b391:0c44:9e1d:a8b2:c5d7:e3f0:1234"
label = "Relay B (US)"

# Bridge these rooms
[[global_rooms]]
name = "android"

[[global_rooms]]
name = "general"
```

**Relay B** (`relay-b.toml` on 10.0.0.5):

```toml
listen_addr = "0.0.0.0:4433"
metrics_port = 9090

# Outbound: connect to Relay A
[[peers]]
url = "193.180.213.68:4433"
fingerprint = "a5d6:e3c6:5ae7:185c:4eb1:af89:daed:4a43"
label = "Relay A (EU)"

# Accept inbound from Relay A
[[trusted]]
fingerprint = "a5d6:e3c6:5ae7:185c:4eb1:af89:daed:4a43"
label = "Relay A (EU)"

# Same global rooms
[[global_rooms]]
name = "android"

[[global_rooms]]
name = "general"
```
### Three-Relay Chain (Full Mesh)

For three relays (A, B, C) in full-mesh federation, each relay needs `[[peers]]` and `[[trusted]]` entries for the other two:

**Relay A** (EU):

```toml
listen_addr = "0.0.0.0:4433"
metrics_port = 9090

# Probe all peers
probe_targets = ["10.0.0.5:4433", "10.0.0.9:4433"]
probe_mesh = true

# Peers
[[peers]]
url = "10.0.0.5:4433"
fingerprint = "7f2a:b391:0c44:9e1d:a8b2:c5d7:e3f0:1234"
label = "Relay B (US)"

[[peers]]
url = "10.0.0.9:4433"
fingerprint = "3c8e:d2a1:f7b5:6049:81c3:e9d4:a2f6:5678"
label = "Relay C (APAC)"

# Trust
[[trusted]]
fingerprint = "7f2a:b391:0c44:9e1d:a8b2:c5d7:e3f0:1234"
label = "Relay B (US)"

[[trusted]]
fingerprint = "3c8e:d2a1:f7b5:6049:81c3:e9d4:a2f6:5678"
label = "Relay C (APAC)"

# Global rooms
[[global_rooms]]
name = "android"

[[global_rooms]]
name = "general"
```

**Relay B** and **Relay C** follow the same pattern, listing the other two relays in their `[[peers]]` and `[[trusted]]` sections.
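Hand-writing the n·(n-1) peer/trusted entries gets tedious as the mesh grows. A small generator sketch -- an illustrative helper, not part of the wzp tooling -- that emits one relay's `[[peers]]` and `[[trusted]]` blocks from a single table of relays:

```python
def mesh_config(relays: dict[str, tuple[str, str]], me: str) -> str:
    """Emit the [[peers]] and [[trusted]] TOML blocks one relay in a full
    mesh needs for every other relay. `relays` maps label -> (url,
    fingerprint); `me` is the label of the relay being configured."""
    lines = []
    for label, (url, fp) in relays.items():
        if label == me:
            continue
        lines += ["[[peers]]", f'url = "{url}"',
                  f'fingerprint = "{fp}"', f'label = "{label}"', ""]
    for label, (_, fp) in relays.items():
        if label == me:
            continue
        lines += ["[[trusted]]", f'fingerprint = "{fp}"',
                  f'label = "{label}"', ""]
    return "\n".join(lines)
```

Run once per relay label and paste the output into that relay's `relay.toml`.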

## Monitoring

### Prometheus Metrics

Enable with `--metrics-port <port>` or `metrics_port` in TOML. The relay exposes metrics at `GET /metrics` on the specified HTTP port.

#### Relay Metrics

| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `wzp_relay_active_sessions` | Gauge | -- | Current active sessions |
| `wzp_relay_active_rooms` | Gauge | -- | Current active rooms |
| `wzp_relay_packets_forwarded_total` | Counter | `room` | Total packets forwarded |
| `wzp_relay_bytes_forwarded_total` | Counter | `room` | Total bytes forwarded |
| `wzp_relay_auth_attempts_total` | Counter | `result` (ok/fail) | Auth validation attempts |
| `wzp_relay_handshake_duration_seconds` | Histogram | -- | Crypto handshake time |
#### Per-Session Metrics

| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `wzp_relay_session_jitter_buffer_depth` | Gauge | `session_id` | Buffer depth per session |
| `wzp_relay_session_loss_pct` | Gauge | `session_id` | Packet loss percentage |
| `wzp_relay_session_rtt_ms` | Gauge | `session_id` | Round-trip time |
| `wzp_relay_session_underruns_total` | Counter | `session_id` | Jitter buffer underruns |
| `wzp_relay_session_overruns_total` | Counter | `session_id` | Jitter buffer overruns |
#### Inter-Relay Probe Metrics

| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `wzp_probe_rtt_ms` | Gauge | `target` | RTT to peer relay |
| `wzp_probe_loss_pct` | Gauge | `target` | Loss to peer relay |
| `wzp_probe_jitter_ms` | Gauge | `target` | Jitter to peer relay |
| `wzp_probe_up` | Gauge | `target` | 1 if reachable, 0 if not |
### Prometheus Scrape Config

```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'wzp-relay'
    static_configs:
      - targets:
          - 'relay-a:9090'
          - 'relay-b:9090'
    scrape_interval: 10s
```
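For ad-hoc scripts that don't warrant a full Prometheus server, the text exposition format is easy to scrape directly. A minimal gauge extractor (an illustrative helper; feed it the body of `curl http://relay-host:9090/metrics`):

```python
def parse_gauges(exposition: str, metric: str) -> dict[str, float]:
    """Pull the values of one metric out of Prometheus text-format output,
    keyed by the raw label string ('' if unlabeled). A minimal parser for
    scripts, not a full exposition-format parser."""
    out = {}
    for line in exposition.splitlines():
        line = line.strip()
        if line.startswith("#") or not line.startswith(metric):
            continue
        rest = line[len(metric):]
        if not rest or rest[0] not in "{ ":
            continue  # prefix matched a longer metric name; skip
        if rest.startswith("{"):
            labels, _, value = rest[1:].partition("} ")
        else:
            labels, value = "", rest.strip()
        try:
            out[labels] = float(value)
        except ValueError:
            pass
    return out
```

For example, `parse_gauges(body, "wzp_probe_up")` yields one entry per probed peer, letting a cron job alert when any value is 0.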

### Grafana Dashboard

A pre-built dashboard is available at `docs/grafana-dashboard.json`. Import it into Grafana for:

1. **Relay Health** -- active sessions, rooms, packets/s, bytes/s
2. **Call Quality** -- per-session jitter depth, loss %, RTT, underruns over time
3. **Inter-Relay Mesh** -- latency heatmap, probe status, loss trends
4. **Web Bridge** -- active connections, frames bridged, auth failures
### Event Log (Protocol Analyzer)

Use `--event-log` to write a JSONL event log that traces every federation media packet through the relay pipeline. Essential for debugging federation audio issues.

```bash
wzp-relay --config relay.toml --event-log /tmp/events.jsonl
```

Each media packet emits events at every decision point:

- `federation_ingress` — packet arrived from a peer relay
- `local_deliver` — packet delivered to local participants
- `dedup_drop` — packet dropped as duplicate
- `rate_limit_drop` — packet dropped by rate limiter
- `room_not_found` — packet for unknown room
- `local_deliver_error` — delivery to local client failed
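The `dedup_drop` decision can be pictured as a time-windowed filter: a (source, sequence) pair seen recently is a duplicate, but entries expire so a reconnecting peer that restarts its sequence numbers is not silently muted. A conceptual sketch -- the key fields and TTL are assumptions, not the relay's actual implementation:

```python
import time

class DedupFilter:
    """Conceptual time-windowed duplicate filter for federation ingress.
    Illustrative only: entries expire after `ttl` seconds so a fresh
    connection reusing old sequence numbers isn't dropped forever."""
    def __init__(self, ttl: float = 2.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self.seen = {}  # (source, seq) -> last-seen timestamp

    def accept(self, source: str, seq: int) -> bool:
        now = self.clock()
        # purge expired entries
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.ttl}
        key = (source, seq)
        if key in self.seen:
            return False  # duplicate -> would emit dedup_drop
        self.seen[key] = now
        return True
```

A stateful filter like this is also why the troubleshooting table below mentions time-based dedup in connection with silent audio on consecutive connects.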

Analyze with:

```bash
# Count events by type
python3 -c "
import json, collections, sys
c = collections.Counter()
for l in sys.stdin: c[json.loads(l)['event']] += 1
for k, v in sorted(c.items(), key=lambda x: -x[1]): print(f'  {k}: {v}')
" < events.jsonl
```
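Beyond counting, it is often useful to follow a single packet through every decision point. A sketch that collects the events for one packet id -- the `seq` field name is an assumption; check one line of your `events.jsonl` for the actual key:

```python
import json

def trace_packet(jsonl_lines, packet_id, id_field="seq"):
    """Collect, in order, every event emitted for one packet. The id field
    name ('seq') is illustrative -- inspect your events.jsonl for the
    actual key the relay writes."""
    events = []
    for line in jsonl_lines:
        ev = json.loads(line)
        if ev.get(id_field) == packet_id:
            events.append(ev["event"])
    return events
```

A healthy path ends in `local_deliver`; a trace ending in `dedup_drop` or `rate_limit_drop` shows exactly where the packet died.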

### Remote Version Check

Verify a deployed relay's version without SSH:

```bash
wzp-client --version-check <relay-addr:port>
```
### Debug Tap

Use `--debug-tap` to log packet headers for debugging:

```bash
# Log headers for room "android"
wzp-relay --debug-tap android

# Log headers for all rooms
wzp-relay --debug-tap '*'
```

Or in TOML:

```toml
debug_tap = "android"
```
### Mesh Status

Print the current mesh health table (diagnostic):

```bash
wzp-relay --mesh-status
```
## Authentication

### featherChat Token Validation

When `--auth-url` is set, the relay requires clients to send an `AuthToken` signal message as their first message after QUIC connection. The relay validates the token by calling:

```
POST <auth_url>
Content-Type: application/json
Authorization: Bearer <token>
```

Expected response:

```json
{
  "valid": true,
  "fingerprint": "a5d6:e3c6:...",
  "alias": "username"
}
```

If validation fails, the client is disconnected.
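For local testing without a featherChat deployment, a stand-in validator speaking the request/response shape above can be pointed at via `--auth-url`. A sketch -- the hard-coded token allowlist and its contents are placeholders; only the wire shape follows the documented contract:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder data: token -> (fingerprint, alias). Not real credentials.
VALID_TOKENS = {"dev-token": ("a5d6:e3c6:...", "devuser")}

def validate(token: str) -> dict:
    """Build the response body the relay expects from the auth endpoint."""
    entry = VALID_TOKENS.get(token)
    if entry:
        return {"valid": True, "fingerprint": entry[0], "alias": entry[1]}
    return {"valid": False}

class Validator(BaseHTTPRequestHandler):
    def do_POST(self):
        auth = self.headers.get("Authorization", "")
        token = auth.removeprefix("Bearer ").strip()
        payload = json.dumps(validate(token)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To run: HTTPServer(("127.0.0.1", 8081), Validator).serve_forever()
# then start the relay with:
#   wzp-relay --auth-url http://127.0.0.1:8081/v1/auth/validate
```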

### Without Authentication

When `--auth-url` is not set, any client can connect. The relay logs:

```
INFO auth disabled -- any client can connect (use --auth-url to enable)
```
## Identity Persistence

### Relay Identity File

The relay stores its identity seed at `~/.wzp/relay-identity` (a 64-character hex string). This seed:

- Is generated automatically on first run
- Persists across restarts
- Derives the relay's Ed25519 signing key and X25519 key agreement key
- Derives the TLS certificate deterministically (same seed = same cert = same fingerprint)

If the identity file is corrupted, the relay generates a new one and logs a warning. This will change the relay's TLS fingerprint, requiring federation peers to update their config.

### Backup

Back up the identity file to preserve the relay's fingerprint:

```bash
cp ~/.wzp/relay-identity /secure/backup/relay-identity
```

To restore, copy the file back before starting the relay.
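A quick way to sanity-check a backup before relying on it: per the section above, the file should contain a 64-character hex string. An illustrative check:

```python
import pathlib
import re

def looks_like_identity_seed(path: str) -> bool:
    """Sanity-check a backed-up identity file: per the docs it should hold
    a 64-character hex string. Tolerates a trailing newline; validates the
    format only, not that it matches any particular relay."""
    try:
        text = pathlib.Path(path).read_text().strip()
    except OSError:
        return False
    return re.fullmatch(r"[0-9a-fA-F]{64}", text) is not None
```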

## Troubleshooting

### Common Issues

| Problem | Cause | Solution |
|---------|-------|----------|
| "unknown argument" on startup | Unrecognized CLI flag | Check `wzp-relay --help` for valid flags |
| "failed to load config" | Invalid TOML syntax | Validate TOML file with `toml-cli` or similar |
| "auth failed" for all clients | Wrong `auth_url` or featherChat server down | Verify URL is reachable: `curl -X POST <auth_url>` |
| "session rejected" | Max sessions reached | Increase `max_sessions` in config |
| Clients cannot connect | Firewall blocking UDP 4433 | Open UDP port 4433 in firewall |
| Federation "unknown relay wants to federate" | Peer's fingerprint not in `[[trusted]]` | Add the logged fingerprint to `[[trusted]]` |
| Federation "fingerprint mismatch" | Peer relay restarted with new identity | Update the fingerprint in `[[peers]]` config |
| Federation audio silent on consecutive connects | Dedup filter or jitter buffer state | Verify relay is running latest build with time-based dedup |
| Federation participant shows wrong relay label | Hub relay not propagating original labels | Update relay to latest build (label preservation fix) |
| Federation disconnect takes >15 seconds | QUIC idle timeout + stale sweeper | Normal: sweeper runs every 5s with 15s TTL. Use latest client with SIGTERM handler for instant disconnect |
| High packet loss between relays | Network congestion or misconfiguration | Check `wzp_probe_loss_pct` metric; consider relay chaining |
| Jitter buffer overruns | Packets arriving faster than playout | Increase `jitter_max_depth` |
| Jitter buffer underruns | Packets arriving too slowly or lost | Check network quality; increase `jitter_target_depth` |
| "probe connection closed" | Peer relay unreachable or crashed | Check peer relay status; will auto-reconnect |
| WebSocket clients cannot connect | `ws_port` not set | Add `--ws-port <port>` or `ws_port` in TOML |
| Browser mic access denied | Not using HTTPS | Use TLS termination in front of the relay or serve via `wzp-web --tls` |
### Log Level Tuning

Set the `RUST_LOG` environment variable for fine-grained control:

```bash
# All relay logs at debug level
RUST_LOG=debug wzp-relay

# Only federation at trace, everything else at info
RUST_LOG=info,wzp_relay::federation=trace wzp-relay

# Quiet mode -- only warnings and errors
RUST_LOG=warn wzp-relay
```
### Health Checks

```bash
# Check if relay is listening
nc -zu relay-host 4433

# Check metrics endpoint
curl -s http://relay-host:9090/metrics | head -20

# Check active sessions
curl -s http://relay-host:9090/metrics | grep wzp_relay_active_sessions

# Check federation probe health
curl -s http://relay-host:9090/metrics | grep wzp_probe_up
```
## Build Pipelines

All production artifacts (Android APK, Linux x86_64 binaries, Windows `.exe`) are built on **SepehrHomeserverdk** using Docker, not on developer workstations. The pipelines are fire-and-forget: a local script invokes a `tmux` session on the remote, the build runs in a Docker container, and the artifact is uploaded to `paste.dk.manko.yoga` (rustypaste) with a notification sent to `ntfy.sh/wzp` on start and completion.

### Docker images

Two long-lived images live on the remote:

| Image | Used by | Base | Key contents |
|---|---|---|---|
| `wzp-android-builder` | Android APK (Tauri mobile + legacy Kotlin), Linux x86_64 relay/CLI | Debian bookworm | Rust stable with Android targets, cargo-ndk, NDK 26.1, Android SDK (API 34 + 35 + 36), JDK 17, Gradle 8.5, Node.js 20, cmake, ninja, tauri-cli 2.x |
| `wzp-windows-builder` | Windows x86_64 `.exe` | Debian bookworm | Rust stable with `x86_64-pc-windows-msvc` target, cargo-xwin (with pre-warmed MSVC CRT + Windows SDK cache), Node.js 20, cmake, ninja, clang, lld, nasm |

Both images are rebuilt rarely — once the base toolchain is stable, rebuilds are only needed to pick up new dependencies or security patches.
**Rebuilding an image** (fire-and-forget, ~10 min on a warm base):

```bash
# Windows
./scripts/build-windows-docker.sh --image-build

# Android (upload and rebuild handled by the Android build script itself — see
# its --image-build flag or equivalent)
```

The `--image-build` flag uploads the local Dockerfile to the remote, kicks off `docker build` under `nohup`, and returns immediately. Monitor with:

```bash
ssh SepehrHomeserverdk 'tail -f /tmp/wzp-windows-image-build.log'
```
### Pipeline: Android APK (Tauri Mobile)

```bash
./scripts/build-tauri-android.sh              # Full: pull + build + upload + notify
./scripts/build-tauri-android.sh --no-pull    # Skip git fetch
./scripts/build-tauri-android.sh --clean      # Force-clean Rust target
```

- **Branch**: `android-rewrite`
- **Image**: `wzp-android-builder`
- **Build command**: `cargo tauri android build --release`
- **Output**: `wzp-release.apk` → uploaded to rustypaste
- **Notifications**: start + completion to `ntfy.sh/wzp`
- **Remote artifact path**: `/mnt/storage/manBuilder/data/cache-android/target/…/release/app-release.apk`
### Pipeline: Linux x86_64 (relay + CLI + bench + web)

```bash
./scripts/build-linux-docker.sh              # Fire-and-forget
./scripts/build-linux-docker.sh --no-pull    # Skip git fetch
./scripts/build-linux-docker.sh --clean      # Force-clean target
./scripts/build-linux-docker.sh --install    # Wait for completion and download locally
```

- **Branch**: `feat/android-voip-client` (script default — override by editing the script or passing an env var)
- **Image**: `wzp-android-builder` (shared, not a separate Linux-only image)
- **Targets built**: `wzp-relay`, `wzp-client`, `wzp-client-audio` (with `--features audio`), `wzp-web`, `wzp-bench`
- **Output**: `wzp-linux-x86_64.tar.gz` with all five binaries → uploaded to rustypaste
- **Local landing dir** (with `--install`): `target/linux-x86_64/`
### Pipeline: Windows x86_64 (`wzp-desktop.exe`)

```bash
./scripts/build-windows-docker.sh                 # Full: pull + build + download locally
./scripts/build-windows-docker.sh --no-pull       # Skip git fetch
./scripts/build-windows-docker.sh --rust          # Force-clean target-windows cache
./scripts/build-windows-docker.sh --image-build   # Rebuild the Docker image (fire-and-forget)
```

- **Branch**: `feat/desktop-audio-rewrite`
- **Image**: `wzp-windows-builder`
- **Build command**: `cargo xwin build --release --target x86_64-pc-windows-msvc --bin wzp-desktop`
- **Output**: `wzp-desktop.exe` (~16 MB) → downloaded to `target/windows-exe/wzp-desktop.exe`, also uploaded to rustypaste
- **Target cache volume**: `target-windows` (separate from the Android target cache to avoid cross-target contamination)
- **Shared cache volumes**: `cargo-registry`, `cargo-git` (shared with Android — both pipelines pull the same crates)

**A/B-preserving workflow** for testing audio backends: rename the prior `.exe` before re-running the build, so both coexist:

```bash
# Preserve prior build as the noAEC baseline
mv target/windows-exe/wzp-desktop.exe target/windows-exe/wzp-desktop-noAEC.exe
./scripts/build-windows-docker.sh
ls -la target/windows-exe/
# wzp-desktop-noAEC.exe   (previous build)
# wzp-desktop.exe         (new build)
```
### Alternative pipeline: Windows via Hetzner Cloud VPS

For situations where Docker image rebuilds would be disruptive, or for one-shot debug builds on a clean machine:

```bash
./scripts/build-windows-cloud.sh                 # Full: create VM → build → download → destroy
./scripts/build-windows-cloud.sh --prepare       # Create VM + install deps, don't build
./scripts/build-windows-cloud.sh --build         # Build on existing VM
./scripts/build-windows-cloud.sh --transfer      # Download .exe from existing VM
./scripts/build-windows-cloud.sh --destroy       # Delete the VM
WZP_KEEP_VM=1 ./scripts/build-windows-cloud.sh   # Don't auto-destroy after successful build
```

- **Provider**: Hetzner Cloud
- **Default server type**: `cx33` (8 GB RAM, 8 vCPU — `cx23` with 4 GB OOMs on the tauri+rustls cross-compile)
- **Image**: `ubuntu-24.04`
- **SSH key**: must be named `wz` in Hetzner and loaded in the local ssh-agent
- **Reminder**: set `WZP_KEEP_VM=1` for multi-build sessions, then **remember to `--destroy` at end of day** so the VM isn't left running overnight. This is tracked in the auto-memory as `feedback_keep_windows_builder_vm.md`.
### Notifications

All pipelines post to `https://ntfy.sh/wzp`. Subscribe from your phone via the [ntfy.sh app](https://ntfy.sh/) to get push notifications on build start/success/failure. Messages include the short git hash and the rustypaste URL on success:

```
WZP Windows build OK [03a80a3] (16M)
https://paste.dk.manko.yoga/<uuid>/wzp-desktop.exe
```
### Rustypaste credentials

Build pipelines read `rusty_address` and `rusty_auth_token` from the `.env` file at `/mnt/storage/manBuilder/.env` on SepehrHomeserverdk. Local scripts that upload directly (`build-windows-cloud.sh` when run in `--transfer` mode) read from `~/.wzp/rustypaste.env` with the same variable names. Both files must be kept in sync manually if rotated.
---

*New file: `docs/BRANCH-android-rewrite.md`*
# Branch: `android-rewrite`

Pivot away from the legacy Kotlin + JNI Android client to a pure-Rust **Tauri 2.x Mobile** app that shares the same frontend and backend code as the desktop client.

## Why this branch exists

The Kotlin + JNI stack was a crash factory. Every failure mode we hit was at the Kotlin ↔ Rust boundary, and each fix uncovered the next layer of the onion:
| Symptom | Root cause | Fix |
|---|---|---|
| App crashed on launch before `onCreate` returned | `__init_tcb` / `pthread_create` bionic private symbols leaking out of `libwzp_android.so` because the Rust crate used `crate-type = ["cdylib", "staticlib"]`. rust-lang/rust#104707 documents that a staticlib built alongside a cdylib leaks non-exported symbols from the staticlib into the cdylib, and Bionic's private internal pthread symbols got bound LOCALLY inside our `.so` instead of being resolved against `libc.so` at `dlopen` time | Dropped `staticlib` from the crate-type list: `crate-type = ["cdylib", "rlib"]` only |
| Stack overflow on `place_call` | `Dispatchers.IO` threads have a ~512 KB stack, too small for the Rust signal-connect path that does the TLS handshake + quinn setup inside one closure | Launched JNI calls from a dedicated `java.lang.Thread` with an explicit 8 MB stack |
| `ring` / `libcrypto` TLS reuse crash on second call | The tokio runtime got dropped between calls, but `ring` keeps a thread-local-stored SSL context that is invalidated when the runtime thread is reused by a new runtime — `ring` sees the stale context and segfaults | Single long-lived tokio runtime for the entire signal-client lifetime; split `start()` into an inline connect+register path and a `run()` path on a separate thread to avoid the `thread::spawn` closure's stack overflow |
| Null dereference on register with fresh install | The identity seed file existed but was empty, and the Rust side dereferenced the zero-length slice | Generate the seed if the file is empty on register |
Every fix kept the app limping along, but the fundamental design problem remained: **state management was split across a Kotlin ViewModel and a Rust engine, with a hand-rolled JNI bridge in between that had to be perfect to not crash**. The working desktop Tauri client (with the same Rust backend) had none of these problems because it spoke to the Rust code via in-process `invoke()` from a WebView, not JNI.

So: rewrite the Android app as a **Tauri 2.x Mobile app**, reusing the entire desktop codebase verbatim (`main.ts`, `style.css`, `index.html`, `main.rs`, `engine.rs` — everything). Tauri Mobile added Android support in v2, it's production-ready, and it eliminates the JNI boundary entirely.

The incident postmortem lives at [`docs/incident-tauri-android-init-tcb.md`](incident-tauri-android-init-tcb.md).
## Architecture

```
┌─────────────────────────────────────────────────┐
│                Tauri 2.x Mobile                 │
│                                                 │
│  Android WebView ────────────── HTML/JS/CSS     │ ← Shared with desktop
│      │                          (main.ts)       │
│      │                                          │
│   invoke() ──────────────────── Rust Commands   │ ← Shared with desktop
│                                 (main.rs)       │
│                        │                        │
│         ┌──────────────┼──────────────┐         │
│         │              │              │         │
│     SignalMgr      CallEngine     Identity      │ ← Shared crates
│    (signal_hub)   (wzp-client)  (wzp-crypto)    │
│         │              │                        │
│         │              │                        │
│         ▼              ▼                        │
│   QUIC to relay   Oboe audio (Android)          │
│                   via wzp-native cdylib         │
└─────────────────────────────────────────────────┘
```
**What is reused from desktop verbatim** (zero rewrite):

- `desktop/src/main.ts` — entire frontend
- `desktop/src/style.css` — all styling
- `desktop/src/identicon.ts` — identicon rendering
- `desktop/index.html` — HTML structure
- `desktop/src-tauri/src/main.rs` — all Tauri commands (`connect`, `disconnect`, `register_signal`, `place_call`, …)
- `desktop/src-tauri/src/engine.rs` — `CallEngine` wrapper

**What is Android-specific**:

- `desktop/src-tauri/src/android_audio.rs` — JVM-side audio routing (`AudioManager.setSpeakerphoneOn` for the earpiece/speaker toggle). Runs from Tauri's existing JNI context — no hand-rolled bridge, Tauri owns the JVM hookup.
- `desktop/src-tauri/src/wzp_native.rs` — runtime `dlopen` of `libwzp_native.so`, a standalone cdylib crate (`crates/wzp-native`) that owns all C++ (the Oboe bridge). Kept in its own crate so its C/C++ static archives never get statically linked into `wzp-desktop`'s `.so`, which would re-trigger the `__init_tcb` / pthread leak.
- `crates/wzp-native/` — the standalone C++/Oboe bridge cdylib. Loaded via `libloading` at runtime from `wzp_native.rs`. Provides capture + playout streams using Oboe's `Usage::VoiceCommunication` + `MODE_IN_COMMUNICATION` combo.
- Android-specific target dependencies in `desktop/src-tauri/Cargo.toml` (`jni`, `ndk-context`, `libloading`) — no CPAL, no VPIO.
## Key architectural decisions

### 1. `wzp-native` as a standalone cdylib loaded via `libloading`

The alternative — linking `wzp-native` as a regular Rust dep with C++ static archives — would cause the same `__init_tcb` crash that killed the Kotlin version. By making `wzp-native` its own cdylib and `dlopen`-ing it at runtime, Bionic's `libc.so` resolves every symbol at load time the way it's supposed to, and no private TCB symbols leak.

### 2. `crate-type = ["cdylib", "rlib"]` only (no `staticlib`)

Same reason. The `rlib` output is needed so the `wzp-desktop` binary target can link against the library; `cdylib` is needed for Android's `System.loadLibrary`; `staticlib` would reintroduce the symbol-leak bug.

### 3. Oboe audio config

`Usage::VoiceCommunication` + Java-side `MODE_IN_COMMUNICATION`. **Never** call `setAudioApi(AAudio)` explicitly — on some devices (the Nothing Phone in particular) it causes Oboe to open the wrong stream type and audio goes silent. Let Oboe pick the audio API automatically. This is documented in the auto-memory `project_tauri_android_audio.md`.

### 4. Speaker/earpiece toggle uses `tokio::task::spawn_blocking`

Oboe's `stop()` + `start()` cycle is synchronous and can block for 50–200 ms. Calling it on the tokio executor stalls every other async task (including the QUIC datagram loop), dropping audio packets. Wrapping the toggle in `spawn_blocking` isolates it to a dedicated thread pool. Fixed in commit `76a4c53`.
## Build pipeline
|
||||||
|
|
||||||
|
Docker on SepehrHomeserverdk, same pattern as the Android legacy pipeline and the Windows pipeline:
|
||||||
|
|
||||||
|
```
|
||||||
|
./scripts/build-tauri-android.sh # Full: pull + build + ntfy + rustypaste
|
||||||
|
./scripts/build-tauri-android.sh --pull # Explicit git pull (default)
|
||||||
|
./scripts/build-tauri-android.sh --clean # Blow away the Rust target cache
|
||||||
|
```
|
||||||
|
|
||||||
|
**Image**: `wzp-android-builder` (shared with the legacy Kotlin pipeline). The Dockerfile was extended to install Node.js 20 LTS, Android API level 36, build-tools 35.0.0, tauri-cli 2.x, and all four Android Rust targets on top of the legacy NDK 26.1 + cargo-ndk + Gradle setup. Both pipelines coexist in the same image.
|
||||||
|
|
||||||
|
**Output**: `wzp-release.apk` uploaded to rustypaste, URL delivered via `ntfy.sh/wzp`.
|
||||||
|
|
||||||
|
## Known quirks (Tauri Mobile specific)
|
||||||
|
|
||||||
|
1. **tauri-cli `android init` writes absolute paths** into `gradle.properties` for the NDK path. Those paths are local to wherever `android init` was run, so they break any cross-machine build unless overridden with `ANDROID_NDK_HOME` at build time. The build script exports `ANDROID_NDK_HOME` explicitly to work around this.
|
||||||
|
|
||||||
|
2. **API 36 vs API 34 coexistence**: the legacy Kotlin pipeline targets API 34, Tauri Mobile 2.x wants compileSdk 36. The shared Docker image installs both SDK levels so neither pipeline needs to reinstall.
|
||||||
|
|
||||||
|
3. **Identity seed lives in Android-specific app data dir**: `/data/data/com.wzp.phone/files/.wzp/identity` instead of `$HOME/.wzp/identity`. The shared `load_or_create_seed()` function in `desktop/src-tauri/src/lib.rs` uses Tauri's `app_data_dir()` which resolves correctly on both Android and desktop — no per-platform code needed.
|
||||||
|
|
||||||
|
4. **Direct calls on macOS previously hit an identity mismatch bug** — the `CallEngine` was using `$HOME/.wzp/identity` directly while `register_signal` used Tauri's `app_data_dir()`. Fixed by routing both through `load_or_create_seed()` (commit `2fd9465`). This was important for cross-platform consistency.
|
||||||
|
|
||||||
|
## Current state (snapshot)
|
||||||
|
|
||||||
|
What works:
|
||||||
|
|
||||||
|
- Tauri Mobile scaffold builds and runs on Android
|
||||||
|
- Signal hub connect + register works
|
||||||
|
- Room mode (SFU group calls) works with Oboe audio
|
||||||
|
- Direct 1:1 calls work with full parity to desktop
|
||||||
|
- Speaker/earpiece toggle works without stalling the audio pipeline
|
||||||
|
- Call history, recent contacts, deregister UI all present (inherited from desktop)
|
||||||
|
|
||||||
|
What remains (task list refs in parens):
|
||||||
|
|
||||||
|
- Background service for keeping signal alive when app is backgrounded (#19)
|
||||||
|
- Proper permission requests (microphone, notifications) on first launch (#19)
|
||||||
|
- Incoming call notification while backgrounded (#19)
|
||||||
|
- App icon + splash screen (#19)
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
- **Build**: `./scripts/build-tauri-android.sh` — verify the APK lands on rustypaste and installs on device.
|
||||||
|
- **Smoke test**: Install → open app → Register → Place call → Receive call. No crashes, audio flows both ways.
|
||||||
|
- **Speaker toggle**: During a call, toggle speaker/earpiece several times in rapid succession. Audio should never stop, and the toggle should respond within ~200 ms.
|
||||||
|
- **Stress test**: Call for 10+ minutes continuous. No memory growth, no packet loss beyond what's attributable to the network.
|
||||||
|
|
||||||
|
## Files of interest
|
||||||
|
|
||||||
|
| Path | Purpose |
|
||||||
|
|---|---|
|
||||||
|
| `desktop/src-tauri/src/lib.rs` | Shared Tauri commands (desktop + Android) |
|
||||||
|
| `desktop/src-tauri/src/android_audio.rs` | JVM-side speaker/earpiece routing |
|
||||||
|
| `desktop/src-tauri/src/wzp_native.rs` | Runtime dlopen of libwzp_native.so |
|
||||||
|
| `crates/wzp-native/` | Standalone C++/Oboe cdylib, loaded at runtime |
|
||||||
|
| `scripts/build-tauri-android.sh` | Remote Docker build pipeline |
|
||||||
|
| `scripts/Dockerfile.android-builder` | Shared Android Docker image (legacy + Tauri) |
|
||||||
|
| `docs/incident-tauri-android-init-tcb.md` | Postmortem of the Kotlin+JNI crash cascade |
|
||||||
665
docs/DESIGN.md
665
docs/DESIGN.md
@@ -1,168 +1,591 @@
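The isolation in decision 4 can be sketched without tokio: offload the blocking route change to its own thread and hand the result back over a channel. This is a minimal, hedged illustration of the principle only — `blocking_reroute` and `toggle_speaker` are invented stand-ins, not the project's actual API, and the real code uses `tokio::task::spawn_blocking` rather than raw `std::thread`.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Invented stand-in for Oboe's synchronous stop()+start() cycle,
// which can block for 50-200 ms.
fn blocking_reroute(speaker_on: bool) -> bool {
    thread::sleep(Duration::from_millis(50));
    speaker_on
}

// Run the blocking call on a dedicated thread and return a channel,
// so the caller's event loop is never stalled -- the same isolation
// spawn_blocking provides on tokio's blocking pool.
fn toggle_speaker(speaker_on: bool) -> mpsc::Receiver<bool> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(blocking_reroute(speaker_on));
    });
    rx
}

fn main() {
    let rx = toggle_speaker(true);
    // The caller stays free to service other work while the
    // reroute completes on its own thread.
    assert!(rx.recv().unwrap());
    println!("toggle completed off the caller's thread");
}
```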
# WarzonePhone Design Document

> Custom encrypted VoIP protocol built in Rust. Designed for hostile network conditions: 5-70% packet loss, 100-500 kbps throughput, 300-800 ms RTT. Multi-platform: Desktop (Tauri), Android, CLI, Web.

## System Overview

WarzonePhone is a voice-over-IP system built from scratch in Rust, targeting reliable encrypted voice communication over severely degraded networks. The protocol uses adaptive codecs (Opus + Codec2), fountain-code FEC (RaptorQ), and end-to-end ChaCha20-Poly1305 encryption over a QUIC transport layer.

The system comprises three categories of components:

1. **Protocol crates** -- a Rust workspace of 7 library crates with a star dependency graph enabling parallel development
2. **Client applications** -- Desktop (Tauri), Android (Kotlin + JNI), CLI, and Web (browser bridge)
3. **Relay infrastructure** -- SFU relay daemons with federation, health probing, and Prometheus metrics

### Design Principles

- **User sovereignty** -- client-driven route selection, BIP39 identity backup, no central authority
- **End-to-end encryption** -- relays never see plaintext audio; SFU forwarding preserves E2E encryption
- **Adaptive resilience** -- automatic codec and FEC switching based on observed network quality
- **Parallel development** -- star dependency graph allows 5 agents/developers to work simultaneously with zero merge conflicts

## Architecture

### Crate Overview

The workspace contains 7 core crates plus integration binaries:

| Crate | Purpose | Key Dependencies |
|-------|---------|-----------------|
| `wzp-proto` | Protocol types, traits, wire format | serde, bytes |
| `wzp-codec` | Audio codecs (Opus, Codec2, RNNoise) | audiopus, codec2, nnnoiseless |
| `wzp-fec` | Forward error correction | raptorq |
| `wzp-crypto` | Cryptography and identity | ed25519-dalek, x25519-dalek, chacha20poly1305, bip39 |
| `wzp-transport` | QUIC transport layer | quinn, rustls |
| `wzp-relay` | Relay daemon (SFU, federation, metrics) | tokio, prometheus |
| `wzp-client` | Call engine and CLI | All above |

Additional integration targets: `wzp-web` (browser bridge via WebSocket), Android native library (JNI), Desktop (Tauri).

### Dependency Graph

```mermaid
graph TD
    PROTO["wzp-proto<br/>(Types, Traits, Wire Format)"]

    CODEC["wzp-codec<br/>(Opus + Codec2 + RNNoise)"]
    FEC["wzp-fec<br/>(RaptorQ FEC)"]
    CRYPTO["wzp-crypto<br/>(ChaCha20 + Identity)"]
    TRANSPORT["wzp-transport<br/>(QUIC / Quinn)"]

    RELAY["wzp-relay<br/>(Relay Daemon)"]
    CLIENT["wzp-client<br/>(CLI + Call Engine)"]
    WEB["wzp-web<br/>(Browser Bridge)"]
    DESKTOP["Desktop<br/>(Tauri + CPAL)"]
    ANDROID["Android<br/>(Kotlin + JNI)"]

    PROTO --> CODEC
    PROTO --> FEC
    PROTO --> CRYPTO
    PROTO --> TRANSPORT

    CODEC --> CLIENT
    FEC --> CLIENT
    CRYPTO --> CLIENT
    TRANSPORT --> CLIENT

    CODEC --> RELAY
    FEC --> RELAY
    CRYPTO --> RELAY
    TRANSPORT --> RELAY

    CLIENT --> WEB
    CLIENT --> DESKTOP
    CLIENT --> ANDROID
    TRANSPORT --> WEB

    FC["warzone-protocol<br/>(featherChat Identity)"] -.->|path dep| CRYPTO

    style PROTO fill:#6c5ce7,color:#fff
    style RELAY fill:#ff9f43,color:#fff
    style CLIENT fill:#00b894,color:#fff
    style WEB fill:#0984e3,color:#fff
    style DESKTOP fill:#0984e3,color:#fff
    style ANDROID fill:#0984e3,color:#fff
    style FC fill:#fd79a8,color:#fff
```

The star pattern ensures each leaf crate (`wzp-codec`, `wzp-fec`, `wzp-crypto`, `wzp-transport`) depends only on `wzp-proto` and never on each other. This enables:

- **Parallel development** -- 5 agents work on 5 crates with no merge conflicts
- **Independent testing** -- each crate has self-contained tests
- **Pluggability** -- any implementation can be swapped by implementing the same trait
- **Fast compilation** -- changing one leaf only recompiles that leaf and integration crates

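The pluggability claim can be sketched as a trait contract: the hub crate defines the interface, a leaf implements it, and integration code programs against the trait object only. This is an illustrative sketch — `FecScheme`, `RaptorQScheme`, and the module layout are invented for the example and are not copied from `wzp-proto`'s actual API.

```rust
mod proto {
    // Stands in for wzp-proto: interfaces only, no implementations.
    pub trait FecScheme {
        fn name(&self) -> &'static str;
        fn repair_symbols(&self, source: usize, ratio_pct: usize) -> usize;
    }
}

mod fec {
    // Stands in for wzp-fec: depends only on the hub, never on
    // another leaf crate.
    use crate::proto::FecScheme;

    pub struct RaptorQScheme;

    impl FecScheme for RaptorQScheme {
        fn name(&self) -> &'static str {
            "raptorq"
        }
        fn repair_symbols(&self, source: usize, ratio_pct: usize) -> usize {
            // Round up, so a 20% ratio on 5 source symbols yields 1 repair.
            (source * ratio_pct + 99) / 100
        }
    }
}

fn main() {
    // An integration crate (wzp-client / wzp-relay) sees only the trait,
    // so swapping RaptorQ for Reed-Solomon means swapping one leaf.
    let scheme: Box<dyn proto::FecScheme> = Box::new(fec::RaptorQScheme);
    assert_eq!(scheme.repair_symbols(5, 20), 1);
    assert_eq!(scheme.repair_symbols(5, 100), 5);
    println!("{} produces ratio-scaled repair symbols", scheme.name());
}
```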
## Audio Pipeline

### Encode Pipeline (Mic to Network)

```mermaid
sequenceDiagram
    participant Mic as Microphone
    participant RNN as RNNoise Denoise
    participant VAD as Silence Detector
    participant ENC as Opus/Codec2 Encode
    participant FEC as RaptorQ FEC Encode
    participant INT as Interleaver
    participant HDR as Header Assembly
    participant CRYPT as ChaCha20-Poly1305
    participant QUIC as QUIC Datagram

    Mic->>RNN: PCM i16 x 960 (20ms @ 48kHz)
    RNN->>VAD: Denoised samples (2 x 480)
    alt Silence detected (>100ms)
        VAD->>ENC: ComfortNoise packet (every 200ms)
    else Active speech or hangover
        VAD->>ENC: Active audio frame
    end
    ENC->>FEC: Compressed frame (padded to 256 bytes)
    FEC->>FEC: Accumulate block (5-10 frames)
    FEC->>INT: Source + repair symbols
    INT->>HDR: Interleaved packets (depth=3)
    HDR->>CRYPT: MediaHeader (12B) or MiniHeader (4B)
    CRYPT->>QUIC: Header=AAD, Payload=encrypted
```

### Decode Pipeline (Network to Speaker)

```mermaid
sequenceDiagram
    participant QUIC as QUIC Datagram
    participant CRYPT as ChaCha20-Poly1305
    participant HDR as Header Parse
    participant DEINT as De-interleaver
    participant FEC as RaptorQ FEC Decode
    participant JIT as Jitter Buffer
    participant DEC as Opus/Codec2 Decode
    participant SPK as Speaker

    QUIC->>CRYPT: Encrypted packet
    CRYPT->>HDR: Decrypt (header=AAD)
    HDR->>DEINT: Parsed MediaHeader + payload
    DEINT->>FEC: Reordered symbols
    FEC->>FEC: Reconstruct from any K of K+R symbols
    FEC->>JIT: Recovered audio frames
    JIT->>JIT: Sequence-ordered BTreeMap
    JIT->>DEC: Pop when depth >= target
    DEC->>SPK: PCM i16 x 960
```

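The silence-detector branch of the encode pipeline (declare silence after >100 ms, then one ComfortNoise packet every 200 ms) can be sketched as a small state machine over 20 ms frames. This is a hedged illustration only: `SilenceGate`, the energy metric, and the threshold are invented for the example, not the project's actual VAD.

```rust
/// What the encoder emits for one 20 ms frame.
#[derive(Debug, PartialEq)]
enum FrameDecision {
    Active,       // encode and send speech
    ComfortNoise, // send a ComfortNoise packet
    Skip,         // sustained silence, no CN packet due yet
}

/// Illustrative silence gate: 5 quiet frames = 100 ms hangover,
/// then one ComfortNoise per 10 frames = every 200 ms.
struct SilenceGate {
    quiet_frames: u32,
}

impl SilenceGate {
    fn new() -> Self {
        Self { quiet_frames: 0 }
    }

    fn decide(&mut self, frame: &[i16], threshold: f64) -> FrameDecision {
        // Mean energy of the frame; an invented stand-in for a real VAD.
        let energy = frame
            .iter()
            .map(|&s| (s as f64) * (s as f64))
            .sum::<f64>()
            / frame.len() as f64;
        if energy >= threshold {
            self.quiet_frames = 0;
            return FrameDecision::Active;
        }
        self.quiet_frames += 1;
        if self.quiet_frames <= 5 {
            // Hangover: keep encoding through short pauses (<= 100 ms).
            FrameDecision::Active
        } else if self.quiet_frames % 10 == 0 {
            // One ComfortNoise packet per 200 ms of sustained silence.
            FrameDecision::ComfortNoise
        } else {
            FrameDecision::Skip
        }
    }
}

fn main() {
    let mut gate = SilenceGate::new();
    let silent = [0i16; 960]; // one 20 ms frame at 48 kHz
    let mut decisions = Vec::new();
    for _ in 0..20 {
        decisions.push(gate.decide(&silent, 1000.0));
    }
    // Frames 1-5 ride the hangover; CN packets land on frames 10 and 20.
    assert_eq!(decisions[4], FrameDecision::Active);
    assert_eq!(decisions[9], FrameDecision::ComfortNoise);
    assert_eq!(decisions[19], FrameDecision::ComfortNoise);
    println!("silence gate behaves as sketched");
}
```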
## Codec System

WarzonePhone uses a dual-codec architecture to cover the full range of network conditions:

### Opus (Primary)

Opus is the primary codec for normal to degraded conditions. It operates at 48 kHz natively with built-in inband FEC and DTX (discontinuous transmission). The `audiopus` crate provides mature Rust bindings to libopus.

| Profile | Bitrate | Frame Duration | FEC Ratio | Total Bandwidth | Use Case |
|---------|---------|---------------|-----------|----------------|----------|
| Studio 64k | 64 kbps | 20ms | 10% | 70.4 kbps | LAN, excellent WiFi |
| Studio 48k | 48 kbps | 20ms | 10% | 52.8 kbps | Good WiFi, wired |
| Studio 32k | 32 kbps | 20ms | 10% | 35.2 kbps | WiFi, LTE |
| Good (24k) | 24 kbps | 20ms | 20% | 28.8 kbps | WiFi, LTE, decent links |
| Opus 16k | 16 kbps | 20ms | 20% | 19.2 kbps | 3G, moderate congestion |
| Degraded (6k) | 6 kbps | 40ms | 50% | 9.0 kbps | 3G, congested WiFi |

### Codec2 (Fallback)

Codec2 is a narrowband vocoder designed for HF radio links with extreme bandwidth constraints. At 1200 bps it produces intelligible speech in only 6 bytes per 40ms frame -- roughly 20x lower bitrate than Opus at its minimum. It operates at 8 kHz, and the adaptive layer handles 48 kHz <-> 8 kHz resampling transparently. The pure-Rust `codec2` crate means no C dependencies.

| Profile | Bitrate | Frame Duration | FEC Ratio | Total Bandwidth | Use Case |
|---------|---------|---------------|-----------|----------------|----------|
| Codec2 3200 | 3.2 kbps | 20ms | 50% | 4.8 kbps | Poor conditions |
| Catastrophic (1200) | 1.2 kbps | 40ms | 100% | 2.4 kbps | Satellite, extreme loss |

At the catastrophic tier, the entire call (audio + FEC + headers) fits within approximately 3 kbps, which is viable even over severely degraded links.

### ComfortNoise

When the silence detector identifies no speech activity for over 100ms, the encoder switches to emitting a ComfortNoise packet every 200ms instead of encoding silence. This provides approximately 50% bandwidth savings in typical conversations.

### Adaptive Switching

The `AdaptiveEncoder`/`AdaptiveDecoder` in `wzp-codec` hold both codec instances and switch between them based on the active `QualityProfile`. This avoids codec re-initialization latency during tier transitions. The `AdaptiveQualityController` in `wzp-proto` manages tier transitions with hysteresis:

- **Downgrade**: 3 consecutive bad reports (2 on cellular networks)
- **Upgrade**: 10 consecutive good reports (one tier at a time)
- **Network handoff**: a WiFi-to-cellular switch triggers a preemptive one-tier downgrade plus a temporary 10-second FEC boost (+20%)

Quality tier classification thresholds:

| Tier | WiFi/Unknown | Cellular |
|------|-------------|----------|
| Good | loss < 10%, RTT < 400ms | loss < 8%, RTT < 300ms |
| Degraded | loss 10-40%, RTT 400-600ms | loss 8-25%, RTT 300-500ms |
| Catastrophic | loss > 40%, RTT > 600ms | loss > 25%, RTT > 500ms |

## Forward Error Correction (FEC)

### Why RaptorQ Over Reed-Solomon

Reed-Solomon is a classical block erasure code with fixed-rate overhead: the number of repair symbols must be decided in advance, and decoding requires receiving exactly K of any K+R symbols. WarzonePhone instead uses RaptorQ (RFC 6330) fountain codes via the pure-Rust `raptorq` crate (v2):

1. **Rateless** -- generate arbitrary repair symbols on the fly; if conditions worsen mid-block, generate additional repair without re-encoding
2. **Efficient decoding** -- decode from any K symbols with high probability (typically K + 1 or K + 2 suffice)
3. **Lower complexity** -- O(K) encoding/decoding time vs O(K^2) for Reed-Solomon, which matters for real-time audio at 50 frames/second
4. **Variable block sizes** -- 1-56,403 source symbols per block (WZP uses 5-10)

### FEC Block Structure

Each FEC block consists of 5-10 audio frames padded to 256-byte symbols with a 2-byte LE length prefix, so that variable-length audio frames can be recovered exactly:

```
[len:u16 LE][audio_frame][zero_padding_to_256_bytes]
```

### Loss Survival by FEC Ratio

With 5 source frames per block:

| FEC Ratio | Repair Symbols | Survives Loss | Profile |
|-----------|---------------|---------------|---------|
| 10% | 1 | 1 of 6 (16.7%) | Studio |
| 20% | 1 | 1 of 6 (16.7%) | Good |
| 50% | 3 | 3 of 8 (37.5%) | Degraded |
| 100% | 5 | 5 of 10 (50.0%) | Catastrophic |

The benchmark (`wzp-bench --fec --loss 30`) dynamically scales the FEC ratio to survive the requested loss percentage.

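The length-prefixed padding described above (2-byte LE prefix, zero-pad to 256 bytes) can be sketched as a pair of round-trip functions. The function names are illustrative, not the crate's actual API.

```rust
/// Pad a variable-length audio frame into one fixed 256-byte FEC symbol:
/// 2-byte little-endian length prefix, then the frame, then zero padding.
fn pad_symbol(frame: &[u8]) -> Option<[u8; 256]> {
    if frame.len() > 254 {
        return None; // must fit after the 2-byte prefix
    }
    let mut sym = [0u8; 256];
    sym[..2].copy_from_slice(&(frame.len() as u16).to_le_bytes());
    sym[2..2 + frame.len()].copy_from_slice(frame);
    Some(sym)
}

/// Recover the exact original frame from a (possibly FEC-repaired) symbol.
fn unpad_symbol(sym: &[u8; 256]) -> Option<Vec<u8>> {
    let len = u16::from_le_bytes([sym[0], sym[1]]) as usize;
    if len > 254 {
        return None; // corrupt prefix
    }
    Some(sym[2..2 + len].to_vec())
}

fn main() {
    // A 6-byte Codec2 1200 bps frame round-trips exactly.
    let frame = vec![0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF];
    let sym = pad_symbol(&frame).unwrap();
    assert_eq!(unpad_symbol(&sym).unwrap(), frame);
    println!("256-byte symbol round-trip ok");
}
```

Fixed-size symbols are what let RaptorQ treat every frame identically regardless of codec or bitrate.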
### Interleaving

Burst loss protection via depth-3 interleaving: packets from 3 consecutive FEC blocks are interleaved before transmission. A burst of 3 consecutive lost packets affects 3 different blocks (1 loss each) rather than destroying 1 block entirely.

```mermaid
graph LR
    subgraph "FEC Encoder"
        F1[Frame 1] --> BLK[Source Block<br/>5-10 frames]
        F2[Frame 2] --> BLK
        F3[Frame 3] --> BLK
        F4[Frame 4] --> BLK
        F5[Frame 5] --> BLK
        BLK --> SRC[Source Symbols]
        BLK --> REP[Repair Symbols<br/>ratio-dependent]
        SRC --> INT[Interleaver<br/>depth=3]
        REP --> INT
    end

    subgraph "Network"
        INT --> LOSS{Packet Loss}
        LOSS -->|some lost| RCV[Received Symbols]
    end

    subgraph "FEC Decoder"
        RCV --> DEINT[De-interleaver]
        DEINT --> RAPTORQ[RaptorQ Decode<br/>Any K of K+R]
        RAPTORQ --> OUT[Original Frames]
    end

    style LOSS fill:#e17055,color:#fff
    style RAPTORQ fill:#00b894,color:#fff
```

## Why ChaCha20-Poly1305 Over AES-GCM

1. **Software performance**: ChaCha20-Poly1305 is faster than AES-GCM on hardware without AES-NI instructions. This matters for ARM devices (Android phones, Raspberry Pi relays, embedded systems) where AES hardware acceleration may be absent.

2. **Constant-time by design**: ChaCha20 uses only add-rotate-XOR operations, making it inherently resistant to timing side-channel attacks. AES-GCM implementations without hardware support often require careful constant-time implementation.

3. **Warzone messenger compatibility**: The existing Warzone messenger uses ChaCha20-Poly1305 for message encryption. Reusing the same primitive simplifies the security audit and allows key material to be shared across messaging and calling.

4. **16-byte overhead**: Both ChaCha20-Poly1305 and AES-128-GCM produce a 16-byte authentication tag. There is no size advantage to AES-GCM.

5. **AEAD with AAD**: The MediaHeader is used as Associated Authenticated Data (AAD), ensuring the header is authenticated but not encrypted. This allows relays to read routing information (block ID, sequence number) without decrypting the payload.

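The depth-3 interleaving described in the Interleaving section can be sketched as a round-robin merge of three blocks' packets. This is an illustrative sketch only; the function name and shape are invented, not the project's actual interleaver.

```rust
/// Round-robin packets from 3 consecutive FEC blocks, so a burst of
/// 3 consecutive lost packets costs each block only 1 symbol instead
/// of wiping out a single block.
fn interleave<T: Clone>(blocks: &[Vec<T>; 3]) -> Vec<T> {
    let mut out = Vec::new();
    let longest = blocks.iter().map(|b| b.len()).max().unwrap_or(0);
    for i in 0..longest {
        for block in blocks {
            if let Some(item) = block.get(i) {
                out.push(item.clone());
            }
        }
    }
    out
}

fn main() {
    let blocks = [
        vec!["a0", "a1"],
        vec!["b0", "b1"],
        vec!["c0", "c1"],
    ];
    let wire = interleave(&blocks);
    // Consecutive wire packets come from different blocks.
    assert_eq!(wire, vec!["a0", "b0", "c0", "a1", "b1", "c1"]);
    println!("interleaved order: {:?}", wire);
}
```

Losing `wire[0..3]` (a 3-packet burst) drops exactly one symbol per block, which even a 20% FEC ratio can repair per block.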
## Transport Layer

### Why QUIC Over Raw UDP

Raw UDP would be simpler and lower-latency, but WarzonePhone uses QUIC (via the `quinn` crate) for several reasons:

| Feature | Benefit |
|---------|---------|
| DATAGRAM frames (RFC 9221) | Unreliable delivery without head-of-line blocking -- behaves like UDP for media |
| Reliable streams | Multiplexed signaling (CallOffer, Hangup, Rekey) without a separate TCP connection |
| Congestion control | Prevents overwhelming degraded links, important when chaining relays |
| Connection migration | Connections survive IP address changes (WiFi to cellular handoff) |
| TLS 1.3 built-in | Transport-level encryption protects headers and signaling |
| NAT keepalive | 5-second interval maintains NAT bindings without application-level pings |
| Firewall traversal | Runs on UDP port 443 with the `wzp` ALPN identifier |

The tradeoff is approximately 20-40 bytes of additional per-packet overhead compared to raw UDP (QUIC short header + DATAGRAM frame overhead).

### Wire Formats

#### MediaHeader (12 bytes)

```
Byte 0:    [V:1][T:1][CodecID:4][Q:1][FecRatioHi:1]
Byte 1:    [FecRatioLo:6][unused:2]
Bytes 2-3: sequence (u16 BE)
Bytes 4-7: timestamp_ms (u32 BE)
Byte 8:    fec_block_id (u8)
Byte 9:    fec_symbol_idx (u8)
Byte 10:   reserved
Byte 11:   csrc_count

V = version (0), T = is_repair, CodecID = codec, Q = quality_report appended
```

#### MiniHeader (4 bytes, compressed)

```
Bytes 0-1: timestamp_delta_ms (u16 BE)
Bytes 2-3: payload_len (u16 BE)

Preceded by FRAME_TYPE_MINI (0x01). Full header every 50 frames (~1s).
Saves 8 bytes/packet (67% header reduction).
```

#### TrunkFrame (batched datagrams)

```
[count:u16]
[session_id:2][len:u16][payload:len] x count

Packs multiple session packets into one QUIC datagram.
Max 10 entries or 1200 bytes, flushed every 5ms.
```

#### QualityReport (4 bytes, optional trailer)

```
Byte 0: loss_pct (0-255 maps to 0-100%)
Byte 1: rtt_4ms (0-255 maps to 0-1020ms)
Byte 2: jitter_ms
Byte 3: bitrate_cap_kbps
```

### Bandwidth Summary

| Profile | Audio | FEC Overhead | Total | Silence Savings |
|---------|-------|-------------|-------|----------------|
| Studio 64k | 64 kbps | 10% = 6.4 kbps | **70.4 kbps** | ~50% with DTX |
| Studio 48k | 48 kbps | 10% = 4.8 kbps | **52.8 kbps** | ~50% with DTX |
| Studio 32k | 32 kbps | 10% = 3.2 kbps | **35.2 kbps** | ~50% with DTX |
| Good (24k) | 24 kbps | 20% = 4.8 kbps | **28.8 kbps** | ~50% with DTX |
| Degraded (6k) | 6 kbps | 50% = 3.0 kbps | **9.0 kbps** | ~50% with DTX |
| Catastrophic (1.2k) | 1.2 kbps | 100% = 1.2 kbps | **2.4 kbps** | ~50% with DTX |

Additional savings: MiniHeaders save 8 bytes/packet (67% header reduction). Trunking shares QUIC overhead across multiplexed sessions.

## Jitter Buffer Trade-offs

The jitter buffer must balance two competing goals.

**Lower latency** (smaller buffer):

- Better conversational interactivity
- Less memory usage
- But more vulnerable to jitter and reordering

**Higher quality** (larger buffer):

- More time to receive out-of-order packets
- More time for FEC recovery (repair packets may arrive after source packets)
- But adds perceptible delay to the conversation

The default configuration:

- Target: 10 packets (200ms) for the client, 50 packets (1s) for the relay
- Minimum: 3 packets (60ms) before playout begins (client), 25 packets (500ms) for the relay
- Maximum: 250 packets (5s) absolute cap

The relay uses a deeper buffer because it needs to absorb jitter from the lossy inter-relay link. The client uses a shallower buffer for lower latency since it is on the last hop.

**Known issue**: the current jitter buffer does not adapt its depth based on observed jitter. It uses sequence-number ordering only, without timestamp-based playout scheduling. This can lead to drift during long calls, as observed in echo tests.

## Browser Audio: AudioWorklet vs ScriptProcessorNode

The web bridge (`crates/wzp-web/static/`) uses AudioWorklet as the primary audio I/O mechanism, with ScriptProcessorNode as a fallback.

**AudioWorklet** (preferred):

- Runs on a dedicated audio rendering thread
- Lower latency (no main-thread round-trip)
- Consistent 128-sample callback timing
- Supported in Chrome 66+, Firefox 76+, Safari 14.1+

**ScriptProcessorNode** (fallback):

- Runs on the main thread via the `onaudioprocess` callback
- Higher latency, potential glitches from main-thread GC pauses
- Deprecated by the Web Audio specification
- Used when AudioWorklet is not available

Both paths accumulate Float32 samples into 960-sample (20ms) Int16 frames before sending via WebSocket, matching the WZP codec frame size.

**Playback** uses an AudioWorklet with a ring buffer capped at 200ms (9600 samples at 48 kHz). When the buffer exceeds this limit, old samples are dropped to prevent unbounded drift. The fallback path uses scheduled `AudioBufferSourceNode` instances.

## Room Mode: SFU vs MCU Trade-offs

WarzonePhone implements an **SFU** (Selective Forwarding Unit) architecture.

**SFU** (implemented):

- Relay forwards each participant's packets to all other participants unchanged
- No transcoding -- the relay never decodes or re-encodes audio
- O(N) bandwidth at the relay for N participants (each packet is sent N-1 times)
- Each client receives separate streams from each other participant
- Client must mix/decode multiple streams locally
- Lower relay CPU usage (no transcoding)
- End-to-end encryption is preserved (relay never sees plaintext)

**MCU** (not implemented, for comparison):

- Relay would decode all streams, mix them, and re-encode a single combined stream
- O(1) bandwidth to each client (receives one mixed stream)
- Requires the relay to have codec keys (breaks E2E encryption)
- Higher relay CPU (decoding N streams + mixing + re-encoding)
- Audio quality loss from re-encoding

The SFU choice is driven by the E2E encryption requirement: since relays never have access to the audio codec keys, they cannot decode, mix, or re-encode. The current room implementation in `crates/wzp-relay/src/room.rs` forwards received datagrams to all other participants in the room with best-effort delivery -- if one send fails, the relay continues to the next participant.

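The 12-byte MediaHeader layout from the Wire Formats section can be sketched as a pack/parse pair. This is a hedged illustration: the MSB-first bit order within byte 0 and the function names are assumptions for the example, not the project's confirmed encoding.

```rust
/// Pack the 12-byte MediaHeader described in the Wire Formats section.
/// Bit order within byte 0 (MSB first) is an assumption of this sketch.
fn pack_media_header(
    is_repair: bool,
    codec_id: u8,   // 4 bits
    has_quality: bool,
    fec_ratio: u8,  // 7 bits (0-127), split across FecRatioHi/FecRatioLo
    sequence: u16,
    timestamp_ms: u32,
    fec_block_id: u8,
    fec_symbol_idx: u8,
    csrc_count: u8,
) -> [u8; 12] {
    let mut h = [0u8; 12];
    // Byte 0: [V:1][T:1][CodecID:4][Q:1][FecRatioHi:1], version = 0
    h[0] = ((is_repair as u8) << 6)
        | ((codec_id & 0x0F) << 2)
        | ((has_quality as u8) << 1)
        | ((fec_ratio >> 6) & 0x01);
    // Byte 1: [FecRatioLo:6][unused:2]
    h[1] = (fec_ratio & 0x3F) << 2;
    h[2..4].copy_from_slice(&sequence.to_be_bytes());
    h[4..8].copy_from_slice(&timestamp_ms.to_be_bytes());
    h[8] = fec_block_id;
    h[9] = fec_symbol_idx;
    h[10] = 0; // reserved
    h[11] = csrc_count;
    h
}

/// The sequence number is readable without decrypting the payload,
/// since the header travels as AAD rather than ciphertext.
fn parse_sequence(h: &[u8; 12]) -> u16 {
    u16::from_be_bytes([h[2], h[3]])
}

fn main() {
    let h = pack_media_header(true, 3, false, 75, 4242, 123_456, 7, 2, 0);
    assert_eq!(h.len(), 12);
    assert_eq!(parse_sequence(&h), 4242);
    assert_eq!(h[8], 7); // fec_block_id in the clear for routing
    println!("byte 0 = {:#010b}", h[0]);
}
```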
## Security
|
||||||
|
|
||||||
|
### Identity Model
|
||||||
|
|
||||||
|
Every user has a persistent identity derived from a 32-byte seed:
|
||||||
|
|
||||||
|

```mermaid
graph TD
    SEED["32-byte Seed<br/>(BIP39 Mnemonic: 24 words)"] --> HKDF1["HKDF<br/>info='warzone-ed25519'"]
    SEED --> HKDF2["HKDF<br/>info='warzone-x25519'"]

    HKDF1 --> ED["Ed25519 SigningKey<br/>(Digital Signatures)"]
    HKDF2 --> X25519["X25519 StaticSecret<br/>(Key Agreement)"]

    ED --> VKEY["Ed25519 VerifyingKey<br/>(Public)"]
    X25519 --> XPUB["X25519 PublicKey<br/>(Public)"]

    VKEY --> FP["Fingerprint<br/>SHA-256(pubkey), truncated 16 bytes<br/>xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx"]

    style SEED fill:#6c5ce7,color:#fff
    style FP fill:#fd79a8,color:#fff
    style ED fill:#ee5a24,color:#fff
    style X25519 fill:#00b894,color:#fff
```

**BIP39 Mnemonic Backup**: The 32-byte seed can be encoded as a 24-word BIP39 mnemonic for human-readable backup. The same seed produces the same identity on any platform.
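
The colon-separated fingerprint rendering shown in the diagram can be sketched in std-only Rust. The 32-byte digest input is assumed to be the SHA-256 of the Ed25519 public key; the hashing itself is out of scope here:

```rust
/// Render the first 16 bytes of a 32-byte digest as eight
/// colon-separated groups of four hex characters:
/// `xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx`.
/// The digest is assumed to be SHA-256(pubkey), per the diagram.
fn format_fingerprint(digest: &[u8; 32]) -> String {
    digest[..16]
        .chunks(2)
        .map(|pair| format!("{:02x}{:02x}", pair[0], pair[1]))
        .collect::<Vec<_>>()
        .join(":")
}
```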

**featherChat Compatibility**: The identity derivation is compatible with the Warzone messenger (featherChat), allowing a shared identity across messaging and calling.

### Cryptographic Handshake

```mermaid
sequenceDiagram
    participant C as Caller
    participant R as Relay / Callee

    Note over C: Derive identity from seed<br/>Ed25519 + X25519 via HKDF

    C->>C: Generate ephemeral X25519 keypair
    C->>C: Sign(ephemeral_pub || "call-offer")
    C->>R: CallOffer { identity_pub, ephemeral_pub, signature, profiles }

    R->>R: Verify Ed25519 signature
    R->>R: Generate ephemeral X25519 keypair
    R->>R: shared_secret = DH(eph_b, eph_a)
    R->>R: session_key = HKDF(shared_secret, "warzone-session-key")
    R->>R: Sign(ephemeral_pub || "call-answer")
    R->>C: CallAnswer { identity_pub, ephemeral_pub, signature, profile }

    C->>C: Verify signature
    C->>C: shared_secret = DH(eph_a, eph_b)
    C->>C: session_key = HKDF(shared_secret, "warzone-session-key")

    Note over C,R: Both have identical ChaCha20-Poly1305 session key
    C->>R: Encrypted media (QUIC datagrams)
    R->>C: Encrypted media (QUIC datagrams)

    Note over C,R: Rekey every 65,536 packets<br/>New ephemeral DH + HKDF mix
```

### Encryption Details

| Component | Algorithm | Purpose |
|-----------|-----------|---------|
| Identity signing | Ed25519 | Authenticate handshake messages |
| Key agreement | X25519 (ephemeral) | Derive shared secret |
| Key derivation | HKDF-SHA256 | Derive session key from shared secret |
| Media encryption | ChaCha20-Poly1305 | Encrypt audio payloads (16-byte tag) |
| Nonce construction | Deterministic from sequence number | No nonce reuse, no state sync needed |
| Anti-replay | Sliding window (64-packet) | Reject duplicate/old packets |
| Forward secrecy | Rekey every 65,536 packets | New ephemeral DH + HKDF mix |
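
Two of the rows above, deterministic nonces and the 64-packet anti-replay window, can be sketched in std-only Rust. The exact nonce byte layout is an assumption (the real code defines its own), as is the window bookkeeping:

```rust
/// Deterministic 12-byte nonce: 4 zero bytes followed by the 64-bit
/// sequence number, big-endian. The layout is an illustrative assumption.
fn nonce_from_seq(seq: u64) -> [u8; 12] {
    let mut nonce = [0u8; 12];
    nonce[4..].copy_from_slice(&seq.to_be_bytes());
    nonce
}

/// 64-packet sliding anti-replay window, bitmap-based.
struct ReplayWindow {
    highest: u64,
    bitmap: u64, // bit i set => sequence (highest - i) already seen
}

impl ReplayWindow {
    fn new() -> Self { Self { highest: 0, bitmap: 0 } }

    /// Returns true if `seq` is fresh; false for duplicates or packets
    /// older than the 64-packet window.
    fn accept(&mut self, seq: u64) -> bool {
        if seq > self.highest {
            // Advance the window; drop history that scrolls out.
            let shift = seq - self.highest;
            self.bitmap = if shift >= 64 { 0 } else { self.bitmap << shift };
            self.bitmap |= 1;
            self.highest = seq;
            true
        } else {
            let offset = self.highest - seq;
            if offset >= 64 { return false; }            // too old
            let mask = 1u64 << offset;
            if self.bitmap & mask != 0 { return false; } // duplicate
            self.bitmap |= mask;
            true
        }
    }
}
```

Because the nonce is a pure function of the sequence number, sender and receiver never need to synchronize nonce state.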

**Why ChaCha20-Poly1305 over AES-GCM**:
- Faster on hardware without AES-NI (ARM phones, Raspberry Pi relays)
- Inherently constant-time (add-rotate-XOR only)
- Compatible with Warzone messenger (featherChat)
- Same 16-byte authentication tag overhead as AES-GCM

**AEAD with AAD**: The MediaHeader is used as Associated Authenticated Data. The header is authenticated but not encrypted, allowing relays to read routing information (block ID, sequence number) without decrypting the payload.

### Trust on First Use (TOFU)

Clients remember the relay's TLS certificate fingerprint after first connection. If the fingerprint changes on a subsequent connection, the desktop client shows a "Server Key Changed" warning dialog. The relay derives its TLS certificate deterministically from its persisted identity seed, so the fingerprint is stable across restarts.
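
The TOFU check reduces to a small pin store; the following std-only sketch (persistence omitted, type names invented for illustration) shows the three outcomes:

```rust
use std::collections::HashMap;

/// TOFU pin store sketch: remember the first fingerprint seen per relay
/// host and flag any later mismatch. Persistence is out of scope here.
#[derive(Debug, PartialEq)]
enum TofuResult { FirstUse, Match, Changed }

struct PinStore {
    pins: HashMap<String, [u8; 32]>,
}

impl PinStore {
    fn new() -> Self { Self { pins: HashMap::new() } }

    fn check(&mut self, host: &str, fingerprint: [u8; 32]) -> TofuResult {
        match self.pins.get(host) {
            None => {
                // Trust on first use: pin whatever we see first.
                self.pins.insert(host.to_string(), fingerprint);
                TofuResult::FirstUse
            }
            Some(pinned) if *pinned == fingerprint => TofuResult::Match,
            // Mismatch: surface the "Server Key Changed" warning.
            Some(_) => TofuResult::Changed,
        }
    }
}
```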

## Relay Architecture

### Room Mode (Default SFU)

In room mode, the relay acts as a Selective Forwarding Unit. Clients join named rooms via the QUIC SNI (Server Name Indication) field. The relay forwards each participant's encrypted packets to all other participants in the room without decoding or re-encoding.

```mermaid
graph TB
    subgraph "Room Mode (SFU)"
        C1[Client 1] -->|"QUIC SNI=room-hash"| RM[Room Manager]
        C2[Client 2] -->|"QUIC SNI=room-hash"| RM
        C3[Client 3] -->|"QUIC SNI=room-hash"| RM
        RM --> R1[Room 'podcast']
        R1 -->|fan-out| C1
        R1 -->|fan-out| C2
        R1 -->|fan-out| C3
    end

    style RM fill:#ff9f43,color:#fff
    style R1 fill:#fdcb6e
```

**SFU vs MCU trade-off**: SFU was chosen because it preserves end-to-end encryption (the relay never sees plaintext audio). An MCU would need to decode, mix, and re-encode, breaking E2E encryption. The trade-off is O(N) bandwidth at the relay for N participants.

### Forward Mode

With `--remote`, the relay forwards all traffic to a remote relay. Used for chaining relays across lossy or censored links:

```
Client --> Relay A (--remote B) --> Relay B --> Destination Client
```

The relay pipeline in forward mode: FEC decode, jitter buffer, then FEC re-encode for the next hop.

## Federation

### Overview

Two or more relays form a federation mesh. Each relay is an independent SFU. When configured to trust each other, they bridge **global rooms** -- participants on relay A in a global room hear participants on relay B in the same room.

### Configuration

Federation uses three TOML configuration sections:

- `[[peers]]` -- outbound connections to peer relays (url + TLS fingerprint)
- `[[trusted]]` -- inbound connections accepted from relays (TLS fingerprint only)
- `[[global_rooms]]` -- room names to bridge across all federated peers
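
A hypothetical config illustrating the three sections; the section names come from the list above, but the individual key names (`url`, `fingerprint`, `name`) are assumptions, not the verified schema:

```toml
# Illustrative shape only -- key names within each section are assumptions.
[[peers]]
url = "quic://relay-b.example:4433"
fingerprint = "abab:abab:abab:abab:abab:abab:abab:abab"

[[trusted]]
fingerprint = "cdcd:cdcd:cdcd:cdcd:cdcd:cdcd:cdcd:cdcd"

[[global_rooms]]
name = "podcast"
```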

### Federation Topology

```mermaid
graph TB
    subgraph "Relay A (EU)"
        A_RM[Room Manager]
        A_FM[Federation Manager]
        A1[Alice - local]
        A2[Bob - local]
        A_RM --> A_FM
    end

    subgraph "Relay B (US)"
        B_RM[Room Manager]
        B_FM[Federation Manager]
        B1[Charlie - local]
        B_RM --> B_FM
    end

    A_FM <-->|"QUIC SNI='_federation'<br/>GlobalRoomActive/Inactive<br/>Media forwarding"| B_FM

    A1 -->|media| A_RM
    A2 -->|media| A_RM
    B1 -->|media| B_RM

    A_RM -->|"federated fan-out"| A1
    A_RM -->|"federated fan-out"| A2
    B_RM -->|"federated fan-out"| B1

    style A_FM fill:#6c5ce7,color:#fff
    style B_FM fill:#6c5ce7,color:#fff
    style A_RM fill:#ff9f43,color:#fff
    style B_RM fill:#ff9f43,color:#fff
```

### Protocol

1. On startup, each relay connects to all configured `[[peers]]` via QUIC with SNI `"_federation"`
2. After the QUIC handshake, it sends `FederationHello { tls_fingerprint }` for identity verification
3. The peer verifies the fingerprint against its `[[trusted]]` or `[[peers]]` list
4. When a local participant joins a global room, the relay sends `GlobalRoomActive { room }` to all peers
5. When the last local participant leaves, it sends `GlobalRoomInactive { room }`
6. Media is forwarded as `[room_hash:8][original_media_packet]` -- the relay does not decrypt
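
Step 6's framing is a plain 8-byte prefix; a std-only sketch of wrapping and unwrapping it (the relay never touches the encrypted payload):

```rust
/// Prepend the 8-byte room hash to the original (still-encrypted)
/// media packet, per the `[room_hash:8][original_media_packet]` layout.
fn frame_federated(room_hash: [u8; 8], media: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(8 + media.len());
    out.extend_from_slice(&room_hash);
    out.extend_from_slice(media);
    out
}

/// Split a federated frame back into (room_hash, media).
/// Returns None for frames shorter than the 8-byte prefix.
fn unframe_federated(frame: &[u8]) -> Option<([u8; 8], &[u8])> {
    if frame.len() < 8 {
        return None;
    }
    let mut hash = [0u8; 8];
    hash.copy_from_slice(&frame[..8]);
    Some((hash, &frame[8..]))
}
```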

### What Relays Do NOT Do

- **No transcoding** -- media passes through as-is
- **No re-encryption** -- packets are already encrypted E2E
- **No central coordinator** -- each relay independently connects to configured peers
- **No automatic peer discovery** -- peers must be explicitly configured

### Failure Handling

- If a peer goes down, local rooms continue working; federated participants disappear from presence
- Reconnection: attempts start at 30 seconds and back off exponentially up to a 5-minute cap
- If a peer restarts with a different identity, the fingerprint check fails with a clear log message
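
The reconnect schedule can be sketched as a pure function of the attempt count. The document states only the 30-second start and 5-minute cap; doubling per attempt is an assumption:

```rust
use std::time::Duration;

/// Reconnect delay sketch: start at 30s, double per failed attempt,
/// cap at 5 minutes. Doubling is an assumption; the document only
/// specifies the 30s start and the 5-minute ceiling.
fn reconnect_delay(attempt: u32) -> Duration {
    let base_secs = 30u64;
    let cap_secs = 300u64; // 5 minutes
    let secs = base_secs.saturating_mul(1u64 << attempt.min(10));
    Duration::from_secs(secs.min(cap_secs))
}
```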

## Jitter Buffer

The jitter buffer balances latency vs quality:

| Setting | Client | Relay |
|---------|--------|-------|
| Target depth | 10 packets (200ms) | 50 packets (1s) |
| Minimum before playout | 3 packets (60ms) | 25 packets (500ms) |
| Maximum cap | 250 packets (5s) | 250 packets (5s) |

The relay uses a deeper buffer to absorb jitter from lossy inter-relay links. The client uses a shallower buffer for lower latency.

The adaptive playout delay tracks jitter via exponential moving average and adjusts the target depth:

```
target_delay = ceil(jitter_ema / 20ms) + 2
```
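
A direct std-only transcription of this formula, assuming 20ms packets and a jitter EMA expressed in milliseconds:

```rust
/// target_delay = ceil(jitter_ema / 20ms) + 2, in packets.
/// Assumes the nominal 20ms packet interval from the formula above.
fn target_depth_packets(jitter_ema_ms: u32) -> u32 {
    jitter_ema_ms.div_ceil(20) + 2
}
```

With zero measured jitter this still keeps a 2-packet floor; 45ms of jitter yields a 5-packet target.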

**Known limitation**: The current jitter buffer does not use timestamp-based playout scheduling. It relies on sequence-number ordering only, which can lead to drift during long calls.

## Signal Messages

Signal messages are sent over reliable QUIC streams as length-prefixed JSON:

```
[4-byte length prefix][serde_json payload]
```
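
The framing itself is a few lines of std-only Rust. Big-endian length is an assumption; the document does not state the byte order:

```rust
/// Length-prefixed framing sketch: 4-byte length, then the JSON payload
/// bytes. Big-endian length is an assumption, not confirmed by the doc.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + payload.len());
    out.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    out.extend_from_slice(payload);
    out
}

/// Try to split one complete frame off the front of `buf`, returning
/// (payload, remaining) or None if the frame is still incomplete.
fn deframe(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return None;
    }
    Some((&buf[4..4 + len], &buf[4 + len..]))
}
```

Returning `None` for short buffers lets a stream reader accumulate bytes until a whole frame is available.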

| Message | Purpose |
|---------|---------|
| `CallOffer` | Identity, ephemeral key, signature, supported profiles |
| `CallAnswer` | Identity, ephemeral key, signature, chosen profile |
| `AuthToken` | featherChat bearer token for relay authentication |
| `Hangup` | Reason: Normal, Busy, Declined, Timeout, Error |
| `Hold` / `Unhold` | Call hold state |
| `Mute` / `Unmute` | Mic mute state |
| `Transfer` | Call transfer to another relay/fingerprint |
| `Rekey` | New ephemeral key for forward secrecy |
| `QualityUpdate` | Quality report + recommended profile |
| `Ping` / `Pong` | Latency measurement (timestamp_ms) |
| `RoomUpdate` | Participant list changes |
| `PresenceUpdate` | Federation presence gossip |
| `RouteQuery` / `RouteResponse` | Presence discovery for routing |
| `FederationHello` | Relay identity during federation setup |
| `GlobalRoomActive` / `GlobalRoomInactive` | Federation room bridging |

## Test Coverage

272 tests across all crates, 0 failures:

| Crate | Tests | Key Coverage |
|-------|-------|--------------|
| wzp-proto | 41 | Wire format, jitter buffer, quality tiers, mini-frames, trunking |
| wzp-codec | 31 | Opus/Codec2 roundtrip, silence detection, noise suppression |
| wzp-fec | 22 | RaptorQ encode/decode, loss recovery, interleaving |
| wzp-crypto | 34 + 28 compat | Encrypt/decrypt, handshake, anti-replay, featherChat identity |
| wzp-transport | 2 | QUIC connection setup |
| wzp-relay | 40 + 4 integration | Room ACL, session mgmt, metrics, probes, mesh, trunking |
| wzp-client | 30 + 2 integration | Encoder/decoder, quality adapter, silence, drift, sweep |
| wzp-web | 2 | Metrics |

## Build Requirements

- **Rust** 1.85+ (2024 edition)
- **Linux**: cmake, pkg-config, libasound2-dev (for audio feature)
- **macOS**: Xcode command line tools (CoreAudio included)
- **Android**: NDK r27c, cmake 3.28+ (from pip)

**New file: `docs/PRD-adaptive-quality.md`** (201 lines)

# PRD: Adaptive Quality Control (Auto Codec)

## Problem

When a user selects "Auto" quality, the system currently just starts at Opus 24k (GOOD) and never changes. There is no runtime adaptation — if the network degrades mid-call, audio breaks up instead of gracefully stepping down to a lower bitrate codec. Conversely, if the network is excellent, the user stays on 24k when they could have studio-quality 64k.

The relay already sends `QualityReport` messages with loss % and RTT, and a `QualityAdapter` exists in `call.rs` that classifies network conditions into GOOD/DEGRADED/CATASTROPHIC — but none of this is wired into the Android or desktop engines.

## Solution

Wire the existing `QualityAdapter` into both engines so that "Auto" mode continuously monitors network quality and switches codecs mid-call. The full quality range should be used:

```
Excellent network    → Studio 64k   (best quality)
Good network         → Opus 24k    (default)
Degraded network     → Opus 6k     (lower bitrate, more FEC)
Poor network         → Codec2 3.2k (vocoder, heavy FEC)
Catastrophic         → Codec2 1.2k (minimum viable voice)
```

## Architecture

```
                     ┌─────────────────────┐
 Relay ────────────► │   QualityReport     │  loss %, RTT, jitter
                     │   (every ~1s)       │
                     └────────┬────────────┘
                              │
                              ▼
                     ┌─────────────────────┐
                     │   QualityAdapter    │  classify + hysteresis
                     │   (3-report window) │
                     └────────┬────────────┘
                              │  recommend new profile
                              ▼
               ┌──────────────┴──────────────┐
               │                             │
               ▼                             ▼
      ┌────────────────┐            ┌────────────────┐
      │    Encoder     │            │    Decoder     │
      │ set_profile()  │            │  (auto-switch  │
      │  + FEC update  │            │ already works) │
      └────────────────┘            └────────────────┘
```

## Existing Infrastructure

### What already exists (in `crates/wzp-client/src/call.rs`)

1. **`QualityAdapter`** (lines 97-196):
   - Sliding window of `QualityReport` messages
   - `classify()`: loss > 15% or RTT > 200ms → CATASTROPHIC, loss > 5% or RTT > 100ms → DEGRADED, else → GOOD
   - `should_switch()`: hysteresis — requires 3 consecutive reports recommending the same profile before switching
   - Prevents oscillation between profiles

2. **`QualityReport`** (in `wzp-proto/src/packet.rs`):
   - Sent by relay piggy-backed on media packets
   - Fields: `loss_pct` (u8, 0-255 scaled), `rtt_4ms` (u8, RTT in 4ms units), `jitter_ms`, `bitrate_cap_kbps`

3. **`CallEncoder::set_profile()`** / **`CallDecoder` auto-switch**:
   - Encoder can switch codec mid-stream
   - Decoder already auto-detects incoming codec from packet headers
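
The classification thresholds and 3-report hysteresis described above can be sketched in std-only Rust. This is an illustration of the stated rules, not the actual `QualityAdapter` implementation:

```rust
/// 3-tier classification mirroring the thresholds listed above, plus the
/// 3-consecutive-report hysteresis. A sketch, not the real QualityAdapter.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Tier { Good, Degraded, Catastrophic }

fn classify(loss_pct: f32, rtt_ms: u32) -> Tier {
    if loss_pct > 15.0 || rtt_ms > 200 {
        Tier::Catastrophic
    } else if loss_pct > 5.0 || rtt_ms > 100 {
        Tier::Degraded
    } else {
        Tier::Good
    }
}

struct Hysteresis {
    pending: Option<(Tier, u32)>, // candidate tier + consecutive count
}

impl Hysteresis {
    fn new() -> Self { Self { pending: None } }

    /// Feed one report; returns Some(tier) only after the same
    /// recommendation has been seen 3 times in a row and it differs
    /// from `current`. This is what prevents oscillation.
    fn update(&mut self, current: Tier, report: Tier) -> Option<Tier> {
        if report == current {
            self.pending = None;
            return None;
        }
        let count = match self.pending {
            Some((t, n)) if t == report => n + 1,
            _ => 1,
        };
        if count >= 3 {
            self.pending = None;
            Some(report)
        } else {
            self.pending = Some((report, count));
            None
        }
    }
}
```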

### What's missing

1. **QualityReport ingestion** — neither Android engine nor desktop engine reads quality reports from the relay
2. **Profile switch loop** — no periodic check that feeds reports to `QualityAdapter` and applies recommended switches
3. **Upward adaptation** — `QualityAdapter` only classifies into 3 tiers (GOOD/DEGRADED/CATASTROPHIC). Needs extension to recommend studio tiers when conditions are excellent (loss < 1%, RTT < 50ms)
4. **Notification to UI** — when quality changes, the UI should show the current active codec

## Requirements

### Phase 1: Basic Adaptive (3-tier)

**Both Android and Desktop:**

1. **Ingest QualityReports**: In the recv loop, extract `quality_report` from incoming `MediaPacket`s when present. Feed to `QualityAdapter`.

2. **Periodic quality check**: Every 1 second (or on each QualityReport), call `adapter.should_switch(&current_profile)`. If it returns `Some(new_profile)`:
   - Switch the encoder: `encoder.set_profile(new_profile)`
   - Update FEC encoder: `fec_enc = create_encoder(&new_profile)`
   - Update frame size if changed (e.g., 20ms → 40ms)
   - Log the switch

3. **Frame size adaptation on switch**: When switching from 20ms to 40ms frames (or vice versa):
   - Android: update `frame_samples` variable, resize `capture_buf`
   - Desktop: same — the send loop reads `frame_samples` dynamically

4. **UI indicator**: Show current active codec in the call screen stats line.
   - Android: add to `CallStats` and display in stats text
   - Desktop: add to `get_status` response and display in stats div

5. **Only in Auto mode**: Adaptive switching should only happen when the user selected "Auto". If they manually selected a profile, respect their choice.

### Phase 2: Extended Range (5-tier)

Extend `QualityAdapter::classify()` to use the full codec range:

| Condition | Profile | Codec |
|-----------|---------|-------|
| loss < 1% AND RTT < 30ms | STUDIO_64K | Opus 64k |
| loss < 1% AND RTT < 50ms | STUDIO_48K | Opus 48k |
| loss < 2% AND RTT < 80ms | STUDIO_32K | Opus 32k |
| loss < 5% AND RTT < 100ms | GOOD | Opus 24k |
| loss < 15% AND RTT < 200ms | DEGRADED | Opus 6k |
| loss >= 15% OR RTT >= 200ms | CATASTROPHIC | Codec2 1.2k |

With hysteresis:
- **Downgrade**: 3 consecutive reports (fast reaction to degradation)
- **Upgrade**: 5 consecutive reports (slow, cautious improvement)
- **Studio upgrade**: 10 consecutive reports (very conservative — avoid bouncing to 64k on brief good patches)
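
The extended table is a straight cascade from strictest to loosest condition; a std-only sketch (illustrative, not the proposed `classify()` implementation):

```rust
/// Extended classification following the Phase 2 table. Evaluation order
/// matters: check from strictest (studio) down to loosest. Sketch only.
#[derive(Debug, PartialEq)]
enum Profile { Studio64k, Studio48k, Studio32k, Good, Degraded, Catastrophic }

fn classify_extended(loss_pct: f32, rtt_ms: u32) -> Profile {
    match () {
        _ if loss_pct < 1.0 && rtt_ms < 30 => Profile::Studio64k,
        _ if loss_pct < 1.0 && rtt_ms < 50 => Profile::Studio48k,
        _ if loss_pct < 2.0 && rtt_ms < 80 => Profile::Studio32k,
        _ if loss_pct < 5.0 && rtt_ms < 100 => Profile::Good,
        _ if loss_pct < 15.0 && rtt_ms < 200 => Profile::Degraded,
        _ => Profile::Catastrophic,
    }
}
```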

### Phase 3: Bandwidth Probing

Rather than relying solely on loss/RTT:

1. Start at GOOD
2. After 10 seconds of stable call, probe upward by switching to STUDIO_32K
3. If no quality degradation after 5 seconds, probe to STUDIO_48K
4. If degradation detected, immediately fall back
5. This discovers the true available bandwidth rather than guessing from loss stats
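
The probing steps can be modeled as a tiny state machine. The 10s initial and 5s per-probe windows come from the list; everything else (one-second ticks, single-level fallback) is an assumption for illustration:

```rust
/// Toy probing state machine for the steps above.
/// level: 0 = GOOD, 1 = STUDIO_32K, 2 = STUDIO_48K.
struct Prober {
    level: u8,
    stable_secs: u32, // seconds at current level without degradation
}

impl Prober {
    fn new() -> Self { Self { level: 0, stable_secs: 0 } }

    /// Called once per second with whether degradation was observed.
    /// Returns the (possibly changed) level.
    fn tick(&mut self, degraded: bool) -> u8 {
        if degraded {
            // Step 4: immediate fallback on degradation.
            self.level = self.level.saturating_sub(1);
            self.stable_secs = 0;
        } else {
            self.stable_secs += 1;
            // 10s of stability before the first probe, 5s per probe after.
            let needed = if self.level == 0 { 10 } else { 5 };
            if self.level < 2 && self.stable_secs >= needed {
                self.level += 1; // probe upward
                self.stable_secs = 0;
            }
        }
        self.level
    }
}
```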

## Implementation Plan

### Android (`crates/wzp-android/src/engine.rs`)

```rust
// In the recv loop, after decoding:
if let Some(ref qr) = pkt.quality_report {
    quality_adapter.ingest(qr);
}

// Periodic check (every 50 frames ≈ 1 second):
if auto_profile && frames_decoded % 50 == 0 {
    if let Some(new_profile) = quality_adapter.should_switch(&current_profile) {
        info!(from = ?current_profile.codec, to = ?new_profile.codec, "auto: switching quality");
        let _ = encoder_ref.lock().set_profile(new_profile);
        *fec_enc_ref.lock() = create_encoder(&new_profile);
        current_profile = new_profile;
        frame_samples = frame_samples_for(&new_profile);
        // Resize capture buffer if needed
    }
}
```

**Challenge**: The encoder lives in the send task while the quality reports arrive in the recv task, so the two need shared state (an `AtomicU8` holding a profile index, or a channel).

**Recommended approach**: Use an `AtomicU8` that the recv task writes and the send task reads:

```rust
let pending_profile = Arc::new(AtomicU8::new(0xFF)); // 0xFF = no change

// Recv task: when adapter recommends switch
pending_profile.store(new_profile_index, Ordering::Release);

// Send task: check at frame boundary
let p = pending_profile.swap(0xFF, Ordering::Acquire);
if p != 0xFF { /* apply switch */ }
```

### Desktop (`desktop/src-tauri/src/engine.rs`)

Same pattern. The desktop engine already has separate send/recv tasks with shared atomics for mic_muted, etc. Add a `pending_profile: Arc<AtomicU8>` following the same pattern.

### Desktop CLI (`crates/wzp-client/src/call.rs`)

The `CallEncoder` already has `set_profile()`. The `CallDecoder` already auto-switches. Just need to:

1. Add `QualityAdapter` to `CallDecoder`
2. Feed quality reports in `ingest()`
3. Check `should_switch()` in `decode_next()`
4. Emit the recommendation via a callback or return value

## Testing

1. **Local test with tc/netem**: Use Linux traffic control to simulate loss/latency:

   ```bash
   # Simulate 10% loss, 150ms RTT
   tc qdisc add dev lo root netem loss 10% delay 75ms
   # Run 2 clients in auto mode, verify they switch to DEGRADED
   ```

2. **CLI test**: Run `wzp-client --profile auto` between two instances with simulated network conditions

3. **Relay quality reports**: Verify the relay actually sends QualityReport messages. If it doesn't yet, that needs to be implemented first (check relay code).

## Open Questions

1. **Does the relay currently send QualityReports?** If not, Phase 1 is blocked until the relay implements per-client loss/RTT tracking and report generation. The relay sees all packets and can compute loss % per sender.

2. **Codec2 3.2k placement**: Should auto mode use Codec2 3.2k between DEGRADED and CATASTROPHIC? It uses 20ms frames (lower latency than Opus 6k's 40ms) but offers speech-only quality.

3. **Cross-client adaptation**: If client A is on GOOD and client B auto-adapts to CATASTROPHIC, client A still sends Opus 24k. Client B can decode it fine (auto-switch on recv). But should A also be told to lower quality to save B's bandwidth? This requires signaling between clients.

## Milestones

| Phase | Scope | Effort | Dependency |
|-------|-------|--------|------------|
| 0 | Verify relay sends QualityReports | 0.5 day | None |
| 1a | Wire QualityAdapter in Android engine | 1 day | Phase 0 |
| 1b | Wire QualityAdapter in desktop engine | 1 day | Phase 0 |
| 1c | UI indicator (current codec) | 0.5 day | Phase 1a/1b |
| 2 | Extended 5-tier classification | 0.5 day | Phase 1 |
| 3 | Bandwidth probing | 2 days | Phase 2 |

**New file: `docs/PRD-coordinated-codec.md`** (198 lines)

# PRD: Coordinated Codec Switching (Relay-Judged Quality)

## Problem

The current adaptive quality system (`QualityAdapter` in call.rs) exists but isn't wired into either engine. Clients encode at a fixed quality chosen at call start. When network conditions change mid-call, audio degrades instead of gracefully stepping down. When conditions improve, clients stay on low quality unnecessarily.

Additionally, in SFU mode with multiple participants, uncoordinated codec switching creates asymmetry: if client A upgrades to 64k while B stays on 24k, bandwidth is wasted. Participants should switch together.

## Solution

The **relay acts as the quality judge** since it sees both sides of every connection. It monitors packet loss, jitter, and RTT per participant, then signals quality recommendations. Clients react to these signals with coordinated codec switches.

## Architecture

```
┌──────────┐         ┌─────────┐         ┌──────────┐
│ Client A │◄───────►│  Relay  │◄───────►│ Client B │
│          │         │ (judge) │         │          │
│ Encoder  │         │         │         │ Encoder  │
│ Decoder  │         │ Monitor │         │ Decoder  │
└──────────┘         │ per-peer│         └──────────┘
                     │ quality │
                     └────┬────┘
                          │
                Quality Signals:
                - StableSignal    (conditions good)
                - DegradeSignal   (conditions bad)
                - UpgradeProposal (try higher quality?)
                - UpgradeConfirm  (all agreed, switch at T)
```

## Quality Classification (Relay-Side)

The relay monitors each participant's connection quality:

| Condition | Classification | Action |
|-----------|----------------|--------|
| loss >= 15% OR RTT >= 200ms | Critical | Immediate downgrade signal |
| loss >= 5% OR RTT >= 100ms | Degraded | Downgrade signal after 3 reports |
| loss < 2% AND RTT < 80ms | Good | Stable signal |
| loss < 1% AND RTT < 50ms for 30s | Excellent | Upgrade proposal |
| loss < 0.5% AND RTT < 30ms for 60s | Studio | Studio upgrade proposal |

## Coordinated Switching Protocol

### Downgrade (fast, safety-first)

1. Relay detects degradation for ANY participant
2. Relay sends `QualityUpdate { recommended_profile: DEGRADED }` to ALL participants
3. ALL participants immediately switch encoder to the recommended profile
4. No negotiation — downgrade is mandatory and instant

### Upgrade (slow, consensual)

1. Relay detects sustained good conditions for ALL participants (threshold: 30s stable)
2. Relay sends `UpgradeProposal { target_profile, switch_timestamp }` to all
3. Each client responds: `UpgradeAccept` or `UpgradeReject`
4. If ALL accept within 5s → Relay sends `UpgradeConfirm { profile, switch_at_ms }`
5. All clients switch encoder at the agreed timestamp (relative to session clock)
6. If ANY rejects or times out → upgrade cancelled, stay on current profile
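
The relay-side tally for the upgrade negotiation can be sketched in std-only Rust. The types below are illustrative, not the real protocol structs; timeout handling is reduced to a `deadline_passed` flag:

```rust
/// Sketch of the relay's upgrade vote tally: confirm only if every
/// participant accepts before the deadline; any rejection or timeout
/// cancels the upgrade. Illustrative types, not the real protocol.
struct UpgradeVote {
    participants: usize,
    accepts: usize,
    rejected: bool,
}

#[derive(Debug, PartialEq)]
enum Outcome { Pending, Confirm, Cancel }

impl UpgradeVote {
    fn new(participants: usize) -> Self {
        Self { participants, accepts: 0, rejected: false }
    }

    fn on_response(&mut self, accepted: bool) -> Outcome {
        if accepted {
            self.accepts += 1;
        } else {
            self.rejected = true;
        }
        self.outcome(false)
    }

    fn outcome(&self, deadline_passed: bool) -> Outcome {
        if self.rejected {
            Outcome::Cancel
        } else if self.accepts == self.participants {
            Outcome::Confirm
        } else if deadline_passed {
            Outcome::Cancel // a 5s timeout counts as rejection
        } else {
            Outcome::Pending
        }
    }
}
```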

### Asymmetric Encoding (SFU optimization)

In SFU mode, each client encodes independently. The relay could allow:
- Client A (strong connection): encode at 64k
- Client B (weak connection): encode at 6k
- Relay forwards A's 64k to B's decoder (auto-switch handles it)
- B benefits from A's quality without needing to send at 64k

This requires NO protocol changes — just each client independently following the relay's recommendation for their own encoding quality. The decoder already handles any codec.

### Split Network Consideration

If participant A has great quality but participant C has terrible quality:
- Option 1: **Match weakest link** — everyone encodes at C's level (current approach, simple)
- Option 2: **Per-participant recommendations** — A encodes at 64k, C encodes at 6k. B (good connection) receives and decodes both. Works because decoders auto-switch per packet.
- Option 3: **Relay transcoding** — relay re-encodes A's 64k as 6k for C. Adds CPU on relay, but saves bandwidth for C. Future feature.

Recommended: start with Option 1 (match weakest), add Option 2 later.

## Signal Messages (New/Modified)

```rust
/// Quality signal from relay to client
QualityDirective {
    /// Recommended profile to use for encoding
    recommended_profile: QualityProfile,
    /// Reason for the recommendation
    reason: QualityReason,
}

enum QualityReason {
    /// Network conditions require this quality level
    NetworkCondition,
    /// Coordinated upgrade — all participants agreed
    CoordinatedUpgrade,
    /// Coordinated downgrade — weakest link determines level
    CoordinatedDowngrade,
}

/// Upgrade proposal from relay
UpgradeProposal {
    target_profile: QualityProfile,
    /// Milliseconds from now when the switch would happen
    switch_delay_ms: u32,
}

/// Client response to upgrade proposal
UpgradeResponse {
    accepted: bool,
}

/// Confirmed upgrade — all clients switch at this time
UpgradeConfirm {
    profile: QualityProfile,
    /// Session-relative timestamp to switch (ms since call start)
    switch_at_session_ms: u64,
}
```

## Relay-Side Implementation

### Per-Participant Quality Tracking

```rust
struct ParticipantQuality {
    /// Sliding window of recent observations
    loss_samples: VecDeque<f32>,   // last 30 seconds
    rtt_samples: VecDeque<u32>,    // last 30 seconds
    jitter_samples: VecDeque<u32>,
    /// Current classification
    classification: QualityClass,
    /// How long the current classification has been stable
    stable_since: Instant,
}
```

### Quality Monitor Task (on relay)

Runs alongside the SFU forwarding loop:

1. Every 1 second, compute per-participant quality from QUIC connection stats
2. Classify each participant
3. If ANY participant degrades → send downgrade to ALL
4. If ALL participants stable for threshold → propose upgrade
5. Track upgrade negotiation state

### Integration with Existing Code

The relay already has access to:
- `QuinnTransport::path_quality()` → loss, RTT, jitter, bandwidth estimates
- `QualityReport` embedded in media packet headers
- Per-session metrics in `RelayMetrics`

The quality monitor just needs to read these existing metrics and produce signals.

## Client-Side Implementation

### Handling Quality Signals

In the recv loop (both Android engine and desktop engine):

```rust
SignalMessage::QualityDirective { recommended_profile, .. } => {
    // Immediate: switch encoder to recommended profile
    encoder.set_profile(recommended_profile)?;
    fec_enc = create_encoder(&recommended_profile);
    frame_samples = frame_samples_for(&recommended_profile);
    info!(codec = ?recommended_profile.codec, "quality directive: switched");
}
```

### P2P Quality (simpler case)

For P2P calls (no relay), both clients directly observe quality:

1. Each client runs its own `QualityAdapter` on the direct connection
2. When quality changes, the client proposes a switch to its peer via signal
3. Simpler negotiation: only 2 parties, no relay middleman
4. Same coordinated switching logic, just peer-to-peer signals

## Backporting P2P → Relay

The quality monitoring and codec switching logic is identical:
- **P2P**: client observes quality directly → proposes switch to peer
- **Relay**: relay observes quality → proposes switch to all clients

The only difference is WHO makes the decision (client vs relay) and HOW many participants need to agree (2 vs N).

Implementation strategy: build for P2P first (simpler, 2 parties), then wrap the same logic with relay-mediated signals for SFU mode.
|
||||||
|
|
||||||
|
## Milestones
|
||||||
|
|
||||||
|
| Phase | Scope | Effort |
|
||||||
|
|-------|-------|--------|
|
||||||
|
| 1 | Relay-side quality monitor (per-participant tracking) | 1 day |
|
||||||
|
| 2 | Downgrade signal (immediate, match weakest) | 1 day |
|
||||||
|
| 3 | Client handling of QualityDirective | 1 day (both engines) |
|
||||||
|
| 4 | Upgrade proposal + negotiation protocol | 2 days |
|
||||||
|
| 5 | P2P quality adaptation (direct observation) | 1 day |
|
||||||
|
| 6 | Per-participant asymmetric encoding (Option 2) | 1 day |
|
||||||
170
docs/PRD-delegated-trust.md
Normal file
@@ -0,0 +1,170 @@
|
|||||||
|
# PRD: Delegated Trust for Relay Federation
|
||||||
|
|
||||||
|
## Problem
|
||||||
|
|
||||||
|
In the current federation model, when Relay 1 trusts Relay 2, and Relay 2 forwards media from Relay 3, Relay 1 has no way to know or control that Relay 3's traffic is reaching it. This is a trust gap — any relay in the chain can introduce untrusted traffic.
|
||||||
|
|
||||||
|
**Example:** Relay 1 (trusted zone) ←→ Relay 2 (hub) ←→ Relay 3 (unknown)
|
||||||
|
|
||||||
|
Relay 1 explicitly trusts Relay 2. But Relay 2 forwards Relay 3's media to Relay 1 without Relay 1's consent. Relay 1 receives media that originated from an entity it never approved.
|
||||||
|
|
||||||
|
## Solution
|
||||||
|
|
||||||
|
Add a `delegate` flag to `[[trusted]]` entries. When `delegate = true`, the relay accepts media forwarded through the trusted peer from relays that the trusted peer vouches for. When `delegate = false` (default), only media originating from explicitly trusted/peered relays is accepted.
|
||||||
|
|
||||||
|
## Trust Levels
|
||||||
|
|
||||||
|
| Config | Meaning |
|
||||||
|
|--------|---------|
|
||||||
|
| `[[peers]]` | "I connect to you and trust your identity" |
|
||||||
|
| `[[trusted]]` | "I accept connections from you" |
|
||||||
|
| `[[trusted]] delegate = true` | "I accept connections from you AND from relays you vouch for" |
|
||||||
|
| No entry | "I reject your connections and drop your forwarded media" |
|
||||||
|
|
||||||
|
## Configuration
|
||||||
|
|
||||||
|
```toml
|
||||||
|
# Relay 1: trusts Relay 2 and delegates trust
|
||||||
|
[[trusted]]
|
||||||
|
fingerprint = "relay-2-tls-fingerprint"
|
||||||
|
label = "Relay 2 (Hub)"
|
||||||
|
delegate = true # Accept relays that Relay 2 forwards from
|
||||||
|
|
||||||
|
# Without delegate (default = false):
|
||||||
|
[[trusted]]
|
||||||
|
fingerprint = "relay-4-tls-fingerprint"
|
||||||
|
label = "Relay 4"
|
||||||
|
# delegate = false (implicit default)
|
||||||
|
# Only direct media from Relay 4 is accepted
|
||||||
|
```
|
||||||
|
|
||||||
|
## Protocol Changes
|
||||||
|
|
||||||
|
### Relay-to-Relay Media Authorization
|
||||||
|
|
||||||
|
When Relay 2 forwards media from Relay 3 to Relay 1, the datagram needs to carry origin information so Relay 1 can decide whether to accept it.
|
||||||
|
|
||||||
|
**Option A: Origin tag in datagram** (recommended)
|
||||||
|
|
||||||
|
Extend the federation datagram format:
|
||||||
|
```
|
||||||
|
[room_hash: 8 bytes][origin_relay_fp: 8 bytes][media_packet]
|
||||||
|
```
|
||||||
|
|
||||||
|
The 8-byte origin fingerprint identifies which relay originally produced the media. The forwarding relay (Relay 2) sets this to the source relay's fingerprint. Relay 1 checks:
|
||||||
|
1. Is the origin relay directly trusted? → accept
|
||||||
|
2. Is the forwarding relay trusted with `delegate = true`? → accept
|
||||||
|
3. Otherwise → drop
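The parse-and-check above can be sketched as follows. This is a simplified illustration of Option A — the parameter names stand in for the real config lookups:

```rust
// Sketch of the Option A check: parse the extended datagram
// [room_hash: 8][origin_relay_fp: 8][media_packet], then apply the
// three-step decision. `trusted_direct` / `forwarder_has_delegate`
// stand in for real config lookups.

/// Split the datagram into (room_hash, origin_fp, media); None if too short.
fn parse_federation(d: &[u8]) -> Option<([u8; 8], [u8; 8], &[u8])> {
    if d.len() < 16 {
        return None;
    }
    let room: [u8; 8] = d[..8].try_into().ok()?;
    let origin: [u8; 8] = d[8..16].try_into().ok()?;
    Some((room, origin, &d[16..]))
}

/// Steps 1-3: direct trust, then delegate flag, otherwise drop.
fn accept_media(
    origin_fp: &[u8; 8],
    trusted_direct: &[[u8; 8]],   // fingerprints we trust explicitly
    forwarder_has_delegate: bool, // delegate flag on the forwarding peer
) -> bool {
    trusted_direct.contains(origin_fp) || forwarder_has_delegate
}
```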
|
||||||
|
|
||||||
|
**Option B: Trust announcement signal**
|
||||||
|
|
||||||
|
When Relay 2 connects to Relay 1, it sends a `FederationTrustChain` signal listing which relays it will forward from:
|
||||||
|
```rust
|
||||||
|
FederationTrustChain {
|
||||||
|
/// Fingerprints of relays this peer may forward media from
|
||||||
|
vouched_relays: Vec<String>,
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Relay 1 checks each fingerprint against its policy:
|
||||||
|
- If Relay 2 has `delegate = true` in Relay 1's config → accept all listed relays
|
||||||
|
- If Relay 2 has `delegate = false` → reject, only accept direct media from Relay 2
|
||||||
|
|
||||||
|
Option B is simpler to implement (no datagram format change) but less granular.
|
||||||
|
|
||||||
|
### Recommended: Option B for v1, Option A for v2
|
||||||
|
|
||||||
|
Option B is simpler — the trust chain is established at connection time, not per-datagram. The forwarding relay announces what it will forward, and the receiving relay approves or rejects upfront.
|
||||||
|
|
||||||
|
## Implementation
|
||||||
|
|
||||||
|
### Config Changes
|
||||||
|
|
||||||
|
```rust
|
||||||
|
#[derive(Clone, Debug, Serialize, Deserialize)]
|
||||||
|
pub struct TrustedConfig {
|
||||||
|
pub fingerprint: String,
|
||||||
|
#[serde(default)]
|
||||||
|
pub label: Option<String>,
|
||||||
|
/// When true, also accept media forwarded through this relay from
|
||||||
|
/// relays it vouches for. Default: false.
|
||||||
|
#[serde(default)]
|
||||||
|
pub delegate: bool,
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Federation Signal
|
||||||
|
|
||||||
|
```rust
|
||||||
|
/// Sent after FederationHello — lists relays this peer will forward from.
|
||||||
|
FederationTrustChain {
|
||||||
|
/// TLS fingerprints of relays whose media may be forwarded through us.
|
||||||
|
vouched_relays: Vec<String>,
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Forwarding Authorization
|
||||||
|
|
||||||
|
In `handle_datagram`, before forwarding media to local participants:
|
||||||
|
|
||||||
|
```rust
|
||||||
|
// Check if we should accept this forwarded media
|
||||||
|
let is_authorized = if source_is_direct_peer {
|
||||||
|
true // Direct peer, always accepted
|
||||||
|
} else {
|
||||||
|
// Check if the forwarding peer has delegate=true
|
||||||
|
let forwarding_peer = fm.find_trusted_by_fingerprint(forwarding_peer_fp);
|
||||||
|
forwarding_peer.map(|t| t.delegate).unwrap_or(false)
|
||||||
|
};
|
||||||
|
|
||||||
|
if !is_authorized {
|
||||||
|
warn!("dropping forwarded media from unauthorized relay chain");
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Relay 2 (Hub) Behavior
|
||||||
|
|
||||||
|
When Relay 2 (the hub) announces its trust chain to connected peers:
|
||||||
|
1. Collect all directly connected peer fingerprints
|
||||||
|
2. Send `FederationTrustChain { vouched_relays }` to each peer
|
||||||
|
3. When a new relay connects, update all peers' trust chains
|
||||||
|
|
||||||
|
### Anti-Spam Properties
|
||||||
|
|
||||||
|
| Attack | Mitigation |
|
||||||
|
|--------|-----------|
|
||||||
|
| Unknown relay connects to hub | Hub rejects (not in `[[trusted]]`) |
|
||||||
|
| Hub forwards spam relay's media | Receiving relay checks delegate flag, drops if false |
|
||||||
|
| Relay spoofs origin fingerprint | Origin tag is set by the forwarding relay, not the source. The forwarding relay is trusted, so if it lies about origin, the trust is misplaced at the config level. |
|
||||||
|
| Chain amplification (A→B→C→D→...) | TTL on forwarded datagrams (decrement at each hop, drop at 0). Default TTL=2 (one intermediate relay). |
|
||||||
|
|
||||||
|
## TTL for Chain Length
|
||||||
|
|
||||||
|
Add a TTL byte to the federation datagram to limit chain depth:
|
||||||
|
|
||||||
|
```
|
||||||
|
[room_hash: 8 bytes][ttl: 1 byte][media_packet]
|
||||||
|
```
|
||||||
|
|
||||||
|
- Default TTL = 2 (allows one intermediate relay: A→B→C)
|
||||||
|
- Each forwarding relay decrements TTL
|
||||||
|
- When TTL = 0, don't forward further (only deliver to local participants)
|
||||||
|
- Configurable per-relay: `max_federation_hops = 2`
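The decrement-and-forward rule can be sketched as a small pure function over the `[room_hash: 8][ttl: 1][media_packet]` layout. A minimal illustration, assuming the TTL byte sits at offset 8 as shown above:

```rust
// Sketch of TTL handling: decrement before forwarding, stop at 0.

/// Rewrite the datagram for the next hop; None means "deliver to local
/// participants only, do not forward further".
fn next_hop(datagram: &[u8]) -> Option<Vec<u8>> {
    if datagram.len() < 9 {
        return None; // malformed: too short for room_hash + ttl
    }
    let ttl = datagram[8];
    if ttl == 0 {
        return None; // TTL exhausted
    }
    let mut out = datagram.to_vec();
    out[8] = ttl - 1; // decrement at each hop
    Some(out)
}
```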
|
||||||
|
|
||||||
|
## Milestones
|
||||||
|
|
||||||
|
| Phase | Scope | Effort |
|
||||||
|
|-------|-------|--------|
|
||||||
|
| 1 | Add `delegate` field to `TrustedConfig` | 0.5 day |
|
||||||
|
| 2 | `FederationTrustChain` signal + announcement | 1 day |
|
||||||
|
| 3 | Authorization check in `handle_datagram` | 0.5 day |
|
||||||
|
| 4 | TTL in federation datagrams | 0.5 day |
|
||||||
|
| 5 | Testing: authorized vs unauthorized forwarding | 0.5 day |
|
||||||
|
|
||||||
|
## Non-Goals (v1)
|
||||||
|
|
||||||
|
- Per-room trust policies (trust Relay X only for room "android")
|
||||||
|
- Dynamic trust negotiation (relays negotiate trust level at runtime)
|
||||||
|
- Revocation (removing a relay from trust chain requires config edit + restart)
|
||||||
|
- Cryptographic proof of origin (signed datagrams from source relay)
|
||||||
22
docs/PRD-desktop-direct-calling.md
Normal file
@@ -0,0 +1,22 @@
|
|||||||
|
# PRD: Desktop Direct Calling — Backport SignalManager
|
||||||
|
|
||||||
|
## Problem
|
||||||
|
|
||||||
|
The desktop Tauri app has the direct calling UI (Room/Direct Call toggle, Register, Call buttons) but the backend uses inline async code in `main.rs` instead of a proper `SignalManager`. This needs to be backported from the Android refactor.
|
||||||
|
|
||||||
|
## Tasks
|
||||||
|
|
||||||
|
1. **Create `signal_mgr.rs` for desktop** — same pattern as Android, or reuse the crate directly
|
||||||
|
2. **Wire into Tauri commands** — `register_signal` should use `SignalManager::connect()` + `run_recv_loop()` on a dedicated thread
|
||||||
|
3. **State polling** — `get_signal_status` should call `SignalManager::get_state_json()`
|
||||||
|
4. **place_call / answer_call** — delegate to SignalManager methods
|
||||||
|
5. **Merge android branch into desktop branch** — resolve the 37 desktop-only + 90 android-only commit divergence
|
||||||
|
6. **Test** — Android calls Desktop, Desktop calls Android
|
||||||
|
|
||||||
|
## UI Fixes
|
||||||
|
|
||||||
|
1. **Default alias** — generate random name on first start (like Android does)
|
||||||
|
2. **Default room** — change from "android" to "general"
|
||||||
|
3. **Fingerprint display** — ensure full `xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx` format (not truncated)
|
||||||
|
4. **Deregister button** — ability to disconnect signal channel
|
||||||
|
5. **Call state reset** — after hangup, return to "Registered" state, not stuck on "Ringing"
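For item 3, the full fingerprint format is eight colon-separated groups of four hex digits, i.e. 16 bytes. A hedged sketch of the formatting helper (name and exact byte length are assumptions based on the format shown):

```rust
// Illustrative: render a 16-byte fingerprint as
// xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx (never truncated).

fn format_fingerprint(bytes: &[u8; 16]) -> String {
    bytes
        .chunks(2)
        .map(|pair| format!("{:02x}{:02x}", pair[0], pair[1]))
        .collect::<Vec<_>>()
        .join(":")
}
```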
|
||||||
59
docs/PRD-mtu-discovery.md
Normal file
@@ -0,0 +1,59 @@
|
|||||||
|
# PRD: QUIC Path MTU Discovery
|
||||||
|
|
||||||
|
## Problem
|
||||||
|
|
||||||
|
WarzonePhone uses conservative 1200-byte QUIC datagrams. Some network paths support larger MTUs (1400+), so we waste bandwidth there. Other, broken paths (VPNs, tunnels, double NAT, cellular) have an MTU below 1200, causing silent packet drops — which may explain why Opus 64k fails on some paths while 24k works (larger encoded frames plus FEC repair packets).
|
||||||
|
|
||||||
|
## Solution
|
||||||
|
|
||||||
|
Enable Quinn's built-in Path MTU Discovery (PMTUD) and handle edge cases:
|
||||||
|
1. PMTUD probes larger packet sizes and discovers the actual path MTU
|
||||||
|
2. Graceful fallback when datagrams exceed discovered MTU
|
||||||
|
3. Expose MTU in metrics for debugging
|
||||||
|
|
||||||
|
## Implementation
|
||||||
|
|
||||||
|
### Phase 1: Enable PMTUD in Quinn
|
||||||
|
|
||||||
|
`crates/wzp-transport/src/config.rs` — update `transport_config()`:
|
||||||
|
|
||||||
|
```rust
|
||||||
|
// Enable PMTUD (Quinn default is enabled, but we should ensure it)
|
||||||
|
config.mtu_discovery_config(Some(quinn::MtuDiscoveryConfig::default()));
|
||||||
|
|
||||||
|
// Set minimum MTU for safety (some paths can't handle 1200)
|
||||||
|
// Quinn default min is 1200, which is the QUIC spec minimum
|
||||||
|
```
|
||||||
|
|
||||||
|
Quinn's `MtuDiscoveryConfig` has:
|
||||||
|
- `interval`: how often to probe (default: 600s)
|
||||||
|
- `upper_bound`: max MTU to probe (default: 1452 for IPv4)
|
||||||
|
- `minimum_change`: min MTU increase to be worth probing (default: 20)
|
||||||
|
|
||||||
|
### Phase 2: Handle MTU-related Failures
|
||||||
|
|
||||||
|
In federation forwarding (`send_raw_datagram`), if the datagram exceeds the connection's current MTU, Quinn returns an error. Handle gracefully:
|
||||||
|
- Log warning with packet size vs MTU
|
||||||
|
- Drop the packet (don't crash)
|
||||||
|
- Track in metrics: `wzp_relay_mtu_exceeded_total`
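A minimal sketch of that graceful-drop path. The counter name mirrors the proposed metric; the size check stands in for Quinn's actual send-time error handling:

```rust
// Sketch: drop-and-count instead of erroring when a datagram exceeds
// the discovered path MTU. The static counter stands in for the
// proposed wzp_relay_mtu_exceeded_total metric.

use std::sync::atomic::{AtomicU64, Ordering};

static MTU_EXCEEDED_TOTAL: AtomicU64 = AtomicU64::new(0);

/// Returns true if the datagram fits the current path MTU.
fn check_datagram_fits(len: usize, path_mtu: u16) -> bool {
    if len > path_mtu as usize {
        // Log packet size vs MTU, bump the metric, drop the packet.
        MTU_EXCEEDED_TOTAL.fetch_add(1, Ordering::Relaxed);
        eprintln!("dropping datagram: {len} bytes > path MTU {path_mtu}");
        return false;
    }
    true
}
```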
|
||||||
|
|
||||||
|
### Phase 3: Codec-Aware MTU
|
||||||
|
|
||||||
|
When the path MTU is small, the relay or client should:
|
||||||
|
- Prefer lower-bitrate codecs (smaller packets)
|
||||||
|
- Reduce FEC ratio (fewer repair packets)
|
||||||
|
- This feeds into the adaptive quality system
|
||||||
|
|
||||||
|
### Phase 4: Expose MTU in Stats
|
||||||
|
|
||||||
|
- Add `path_mtu` to relay metrics (per peer)
|
||||||
|
- Add `path_mtu` to client stats (visible in UI)
|
||||||
|
- Log MTU on connection establishment
|
||||||
|
|
||||||
|
## Non-Goals (v1)
|
||||||
|
|
||||||
|
- Datagram fragmentation (QUIC datagrams are atomic — either fit or don't)
|
||||||
|
- Manual MTU override per relay config
|
||||||
|
- MTU-based codec selection (future, needs adaptive quality)
|
||||||
|
|
||||||
|
## Effort: 1 day
|
||||||
146
docs/PRD-p2p-direct.md
Normal file
@@ -0,0 +1,146 @@
|
|||||||
|
# PRD: Peer-to-Peer Direct Calls (No Relay)
|
||||||
|
|
||||||
|
## Problem
|
||||||
|
|
||||||
|
All calls currently route through a relay, even 1-on-1 calls between clients that could reach each other directly. This adds latency (2x hop), creates a single point of failure, and requires trusting the relay operator (even though media is encrypted, the relay sees metadata).
|
||||||
|
|
||||||
|
## Solution
|
||||||
|
|
||||||
|
For 1-on-1 calls, clients attempt a direct QUIC connection using STUN-discovered addresses. If NAT traversal succeeds, media flows directly between peers. If it fails, fall back to relay-assisted mode (current behavior).
|
||||||
|
|
||||||
|
## Architecture
|
||||||
|
|
||||||
|
```
|
||||||
|
Preferred (P2P):
|
||||||
|
Client A ←──QUIC direct──→ Client B
|
||||||
|
(no relay in media path, true E2E)
|
||||||
|
|
||||||
|
Fallback (Relay):
|
||||||
|
Client A ──→ Relay ──→ Client B
|
||||||
|
(current model)
|
||||||
|
|
||||||
|
Hybrid discovery:
|
||||||
|
Client A → Relay (signaling only) → Client B
|
||||||
|
↓ ↓
|
||||||
|
STUN server STUN server
|
||||||
|
↓ ↓
|
||||||
|
Discover public IP:port Discover public IP:port
|
||||||
|
↓ ↓
|
||||||
|
Exchange candidates via relay signaling
|
||||||
|
↓ ↓
|
||||||
|
Attempt direct QUIC connection ←──→
|
||||||
|
```
|
||||||
|
|
||||||
|
## Why P2P = True E2E
|
||||||
|
|
||||||
|
- QUIC TLS handshake establishes encrypted tunnel directly between A and B
|
||||||
|
- No third party sees the traffic
|
||||||
|
- Certificate pinning via identity fingerprints: each client derives their TLS cert from their Ed25519 seed (same as relay identity). During QUIC handshake, both sides verify the peer's cert fingerprint against the known identity
|
||||||
|
- MITM elimination: if A knows B's fingerprint (from prior call, QR code, or identity server), any interceptor presents a different cert → fingerprint mismatch → connection rejected
|
||||||
|
- Stronger guarantee than relay-assisted: user doesn't need to trust relay operator
|
||||||
|
|
||||||
|
## Requirements
|
||||||
|
|
||||||
|
### Phase 1: STUN Discovery
|
||||||
|
|
||||||
|
1. **STUN client**: lightweight UDP-based client to discover our public IP:port
|
||||||
|
- Use existing public STUN servers (stun.l.google.com:19302, etc.)
|
||||||
|
- Or run a STUN server alongside the relay
|
||||||
|
- Discover: local addresses, server-reflexive addresses (STUN), relay candidates (TURN/relay fallback)
|
||||||
|
|
||||||
|
2. **Candidate gathering**: on call initiation, gather all candidates:
|
||||||
|
- Host candidates: local network interfaces
|
||||||
|
- Server-reflexive: STUN-discovered public IP:port
|
||||||
|
- Relay candidate: the relay's address (fallback)
|
||||||
|
|
||||||
|
3. **Candidate exchange**: via relay signaling channel (existing `IceCandidate` signal message)
|
||||||
|
- A sends candidates to relay → relay forwards to B
|
||||||
|
- B sends candidates to relay → relay forwards to A
|
||||||
|
|
||||||
|
### Phase 2: Direct Connection
|
||||||
|
|
||||||
|
1. **QUIC hole punching**: both clients simultaneously attempt QUIC connections to each other's candidates
|
||||||
|
- Quinn supports connecting to multiple addresses
|
||||||
|
- First successful connection wins
|
||||||
|
- Timeout after 3 seconds, fall back to relay
|
||||||
|
|
||||||
|
2. **Identity verification**: during QUIC handshake, verify peer's TLS cert fingerprint
|
||||||
|
- `server_config_from_seed()` already exists — derive client cert from identity seed
|
||||||
|
- Both sides present certs (mutual TLS)
|
||||||
|
- Verify fingerprint matches expected identity
|
||||||
|
|
||||||
|
3. **Media flow**: once connected, use existing `QuinnTransport` for media + signals
|
||||||
|
- Same `send_media()` / `recv_media()` API
|
||||||
|
- Same codec pipeline, FEC, jitter buffer
|
||||||
|
- No code changes needed in the call engine
|
||||||
|
|
||||||
|
### Phase 3: Adaptive Quality (P2P)
|
||||||
|
|
||||||
|
P2P connections have direct quality visibility — no relay middleman:
|
||||||
|
|
||||||
|
1. Both clients observe RTT, loss, jitter directly from QUIC stats
|
||||||
|
2. Adapt codec quality based on direct observations
|
||||||
|
3. Since only 2 participants, coordinated switching is simple: propose → ack → switch
|
||||||
|
|
||||||
|
This is the simplest case for adaptive quality. Once proven, backport the logic to relay-assisted mode.
|
||||||
|
|
||||||
|
### Phase 4: Hybrid Mode
|
||||||
|
|
||||||
|
1. **Call initiation**: always connect to relay for signaling
|
||||||
|
2. **Parallel attempt**: while relay call is active, attempt P2P in background
|
||||||
|
3. **Seamless migration**: if P2P succeeds, migrate media path from relay to direct
|
||||||
|
- Both clients switch simultaneously
|
||||||
|
- Relay connection kept alive for signaling (presence, room updates)
|
||||||
|
4. **Fallback**: if P2P connection drops, seamlessly fall back to relay
|
||||||
|
|
||||||
|
## Security Properties
|
||||||
|
|
||||||
|
| Property | Relay Mode | P2P Mode |
|
||||||
|
|----------|-----------|----------|
|
||||||
|
| Encryption | ChaCha20-Poly1305 (app layer) | QUIC TLS 1.3 + ChaCha20-Poly1305 |
|
||||||
|
| Key exchange | Via relay signaling | Direct QUIC handshake |
|
||||||
|
| Identity verification | TOFU (server fingerprint) | Mutual TLS cert pinning |
|
||||||
|
| Metadata privacy | Relay sees who talks to whom | No third party sees anything |
|
||||||
|
| MITM resistance | Depends on relay trust | Strong (cert pinning) |
|
||||||
|
| Forward secrecy | ECDH ephemeral keys | QUIC built-in + app-layer rekey |
|
||||||
|
|
||||||
|
## Implementation Notes
|
||||||
|
|
||||||
|
### STUN in Rust
|
||||||
|
|
||||||
|
Use `stun-rs` or `webrtc-rs` crate for STUN client. Minimal: just need Binding Request/Response to discover server-reflexive address.
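The one decoding step that matters regardless of crate choice is the XOR-MAPPED-ADDRESS attribute from the Binding Response (RFC 5389). A self-contained sketch of the IPv4 case, assuming the attribute value has already been extracted from the response:

```rust
// Decode a STUN XOR-MAPPED-ADDRESS value (RFC 5389) into IP:port.
// Wire layout for IPv4: [reserved: 1][family: 1][x-port: 2][x-address: 4],
// where x-port/x-address are XORed with the STUN magic cookie.

use std::net::{Ipv4Addr, SocketAddrV4};

const STUN_MAGIC_COOKIE: u32 = 0x2112_A442;

fn decode_xor_mapped_v4(value: &[u8]) -> Option<SocketAddrV4> {
    if value.len() < 8 || value[1] != 0x01 {
        return None; // not an IPv4 mapping
    }
    let port = u16::from_be_bytes([value[2], value[3]]) ^ (STUN_MAGIC_COOKIE >> 16) as u16;
    let addr = u32::from_be_bytes([value[4], value[5], value[6], value[7]]) ^ STUN_MAGIC_COOKIE;
    Some(SocketAddrV4::new(Ipv4Addr::from(addr), port))
}
```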
|
||||||
|
|
||||||
|
### Quinn Hole Punching
|
||||||
|
|
||||||
|
Quinn's `Endpoint` can both listen and connect. For hole punching:
|
||||||
|
```rust
|
||||||
|
let endpoint = create_endpoint(bind_addr, Some(server_config))?;
|
||||||
|
// Send connect to peer's address (opens NAT pinhole)
|
||||||
|
let conn = connect(&endpoint, peer_addr, "peer", client_config).await?;
|
||||||
|
// Simultaneously, peer connects to our address
|
||||||
|
// First successful handshake wins
|
||||||
|
```
|
||||||
|
|
||||||
|
### Client TLS Certificate
|
||||||
|
|
||||||
|
Already have `server_config_from_seed()` for relays. Create `client_config_from_seed()` that presents a TLS client certificate derived from the identity seed. The peer verifies this cert's fingerprint.
|
||||||
|
|
||||||
|
### Signaling via Relay
|
||||||
|
|
||||||
|
The existing relay connection carries `IceCandidate` signals. No new infrastructure needed — just use the relay as a dumb signaling pipe for candidate exchange.
|
||||||
|
|
||||||
|
## Non-Goals (v1)
|
||||||
|
|
||||||
|
- SFU over P2P (P2P is 1-on-1 only; multi-party uses relay SFU)
|
||||||
|
- TURN server (relay acts as the fallback, no separate TURN)
|
||||||
|
- mDNS local discovery (future)
|
||||||
|
- Mesh P2P for multi-party (future, complex)
|
||||||
|
|
||||||
|
## Milestones
|
||||||
|
|
||||||
|
| Phase | Scope | Effort |
|
||||||
|
|-------|-------|--------|
|
||||||
|
| 1 | STUN client + candidate gathering | 2 days |
|
||||||
|
| 2 | QUIC hole punching + identity verification | 3 days |
|
||||||
|
| 3 | Adaptive quality on P2P connection | 2 days |
|
||||||
|
| 4 | Hybrid mode (relay + P2P, seamless migration) | 3 days |
|
||||||
178
docs/PRD-protocol-analyzer.md
Normal file
@@ -0,0 +1,178 @@
|
|||||||
|
# PRD: Protocol Analyzer & Debug Tap
|
||||||
|
|
||||||
|
## 1. Relay-Side Metadata Tap (`--debug-tap`)
|
||||||
|
|
||||||
|
### Problem
|
||||||
|
|
||||||
|
When debugging federation, codec issues, or packet flow problems, there's no visibility into what's actually flowing through the relay. You have to guess from client-side logs.
|
||||||
|
|
||||||
|
### Solution
|
||||||
|
|
||||||
|
A `--debug-tap <room>` flag on the relay that logs every packet's **header metadata** for a specific room (or all rooms with `--debug-tap *`). No decryption needed — the MediaHeader is not encrypted, only the audio payload is.
|
||||||
|
|
||||||
|
### Output Format
|
||||||
|
|
||||||
|
```
|
||||||
|
[12:00:00.123] TAP room=test dir=in src=192.168.1.5:54321 seq=1234 codec=Opus24k ts=24000 fec_block=5 fec_sym=2 repair=false len=87
|
||||||
|
[12:00:00.123] TAP room=test dir=out dst=192.168.1.6:54322 seq=1234 codec=Opus24k ts=24000 fec_block=5 fec_sym=2 repair=false len=87 fan_out=2
|
||||||
|
[12:00:00.143] TAP room=test dir=in src=192.168.1.5:54321 seq=1235 codec=Opus24k ts=24960 fec_block=5 fec_sym=3 repair=false len=91
|
||||||
|
[12:00:00.500] TAP room=test dir=in src=192.168.1.6:54322 seq=0042 codec=Codec2_1200 ts=40000 fec_block=1 fec_sym=0 repair=false len=6
|
||||||
|
[12:00:01.000] TAP room=test SIGNAL type=RoomUpdate count=3 participants=[Alice,Bob,Charlie]
|
||||||
|
[12:00:05.000] TAP room=test STATS period=5s in_pkts=250 out_pkts=500 fan_out_avg=2.0 loss_detected=0 codecs_seen=[Opus24k,Codec2_1200]
|
||||||
|
```
|
||||||
|
|
||||||
|
### What it shows
|
||||||
|
|
||||||
|
- **Per-packet**: direction, source/dest, sequence number, codec ID, timestamp, FEC block/symbol, repair flag, payload size
|
||||||
|
- **Signals**: RoomUpdate, FederationRoomJoin/Leave, handshake events
|
||||||
|
- **Periodic stats**: packets in/out, average fan-out, codecs seen, detected sequence gaps (loss)
|
||||||
|
- **Federation**: room-hash tagged datagrams with source/dest relay
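The "detected sequence gaps (loss)" figure in the periodic stats can be computed from the observed sequence numbers alone. A simplified sketch that ignores reordering and wraparound:

```rust
// Sketch: count packets implied missing by gaps between consecutively
// observed sequence numbers (reordering/wraparound ignored for brevity).

fn count_seq_gaps(seqs: &[u32]) -> u32 {
    seqs.windows(2)
        .map(|w| w[1].saturating_sub(w[0]).saturating_sub(1))
        .sum()
}
```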
|
||||||
|
|
||||||
|
### Implementation
|
||||||
|
|
||||||
|
**File:** `crates/wzp-relay/src/room.rs` — in `run_participant_plain()` and `run_participant_trunked()`
|
||||||
|
|
||||||
|
After receiving a packet and before forwarding:
|
||||||
|
```rust
|
||||||
|
if debug_tap_enabled {
|
||||||
|
let h = &pkt.header;
|
||||||
|
info!(
|
||||||
|
room = %room_name,
|
||||||
|
dir = "in",
|
||||||
|
src = %addr,
|
||||||
|
seq = h.seq,
|
||||||
|
codec = ?h.codec_id,
|
||||||
|
ts = h.timestamp,
|
||||||
|
fec_block = h.fec_block,
|
||||||
|
fec_sym = h.fec_symbol,
|
||||||
|
repair = h.is_repair,
|
||||||
|
len = pkt.payload.len(),
|
||||||
|
"TAP"
|
||||||
|
);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Activation:** `--debug-tap <room_name>` CLI flag, or `debug_tap = "test"` / `debug_tap = "*"` in TOML config.
|
||||||
|
|
||||||
|
**Performance:** Only active when enabled. When enabled, adds one `info!()` log per packet per direction. At 50 fps × 5 participants = 500 log lines/sec — acceptable for debugging, not for production.
|
||||||
|
|
||||||
|
**Output options:**
|
||||||
|
- Default: tracing log (stderr)
|
||||||
|
- `--debug-tap-file <path>`: write to a dedicated file (JSONL format for machine parsing)
|
||||||
|
|
||||||
|
### Effort: 0.5 day
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 2. Full Protocol Analyzer (Standalone Tool)
|
||||||
|
|
||||||
|
### Problem
|
||||||
|
|
||||||
|
The metadata tap shows packet flow but can't inspect audio content, verify encryption, or measure audio quality. For deep debugging (codec issues, resampling bugs, encryption mismatches), you need to see the actual decrypted audio.
|
||||||
|
|
||||||
|
### Solution
|
||||||
|
|
||||||
|
A standalone `wzp-analyzer` binary that either:
|
||||||
|
- **A)** Acts as a transparent proxy between client and relay (MITM mode)
|
||||||
|
- **B)** Reads a pcap/capture file with QUIC session keys (passive mode)
|
||||||
|
- **C)** Runs as a special "observer" client that joins a room in listen-only mode with all participants' consent
|
||||||
|
|
||||||
|
### Architecture
|
||||||
|
|
||||||
|
**Option C (recommended — simplest, no MITM):**
|
||||||
|
|
||||||
|
```
|
||||||
|
┌──────────────┐
|
||||||
|
Client A ────────►│ Relay │◄──────── Client B
|
||||||
|
│ │
|
||||||
|
│ (SFU) │◄──────── wzp-analyzer
|
||||||
|
└──────────────┘ (observer mode)
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌──────────────────┐
|
||||||
|
│ Decode + Analyze │
|
||||||
|
│ - Packet timing │
|
||||||
|
│ - Codec decode │
|
||||||
|
│ - Audio quality │
|
||||||
|
│ - Jitter stats │
|
||||||
|
│ - Waveform plot │
|
||||||
|
└──────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
The analyzer joins the room as a regular participant (receives all media via SFU forwarding) but doesn't send audio. It decodes everything it receives and produces analysis.
|
||||||
|
|
||||||
|
**Limitation:** End-to-end encrypted payloads can't be decoded without session keys. The analyzer would either:
|
||||||
|
1. Need the session key (shared out-of-band for debugging)
|
||||||
|
2. Or only analyze unencrypted headers + timing (same as the relay tap, but from client perspective with jitter buffer simulation)
|
||||||
|
|
||||||
|
For now, the analyzer can decode raw Opus/Codec2 payloads directly: encryption is not fully enforced in the current codebase (the crypto session is established, but the actual ChaCha20 encryption of payloads is still TODO on some paths).
|
||||||
|
|
||||||
|
### Features
|
||||||
|
|
||||||
|
**Real-time display (TUI):**
|
||||||
|
```
|
||||||
|
┌─ wzp-analyzer: room "podcast" on 193.180.213.68:4433 ─────────────┐
|
||||||
|
│ │
|
||||||
|
│ Participants: Alice (Opus24k), Bob (Codec2_3200) │
|
||||||
|
│ │
|
||||||
|
│ Alice ──────────────────────────────────────── │
|
||||||
|
│ seq: 5234 codec: Opus24k ts: 125760 loss: 0.2% jitter: 3ms │
|
||||||
|
│ RMS: 4521 peak: 15280 silence: no │
|
||||||
|
│ FEC blocks: 1046/1046 complete (0 recovered) │
|
||||||
|
│ ▁▂▃▅▇█▇▅▃▂▁▁▂▃▅▇█▇▅▃▂▁ (waveform last 1s) │
|
||||||
|
│ │
|
||||||
|
│ Bob ────────────────────────────────────── │
|
||||||
|
│ seq: 2617 codec: Codec2_3200 ts: 62800 loss: 1.5% jitter: 8ms│
|
||||||
|
│ RMS: 1250 peak: 6800 silence: no │
|
||||||
|
│ FEC blocks: 523/525 complete (4 recovered) │
|
||||||
|
│ ▁▁▂▃▅▇▅▃▂▁▁▁▂▃▅▇▅▃▂▁▁ (waveform last 1s) │
|
||||||
|
│ │
|
||||||
|
│ Total: 7851 pkts recv, 0 pkts sent, 2 participants │
|
||||||
|
│ Uptime: 2m 35s │
|
||||||
|
└──────────────────────────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
**Recorded analysis:**
|
||||||
|
- Save all received packets to a capture file
|
||||||
|
- Post-session report: per-participant stats, quality timeline, codec switches, packet loss patterns
|
||||||
|
- Export decoded audio as WAV per participant (if decryptable)
|
||||||
|
|
||||||
|
**Quality metrics per participant:**
|
||||||
|
- Packet loss % (from sequence gaps)
|
||||||
|
- Jitter (inter-arrival time variance)
|
||||||
|
- Codec switches (timestamps + reasons)
|
||||||
|
- RMS audio level over time
|
||||||
|
- Silence detection
|
||||||
|
- FEC recovery rate
|
||||||
|
- Round-trip estimates (from Ping/Pong if available)
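The jitter metric above can be computed RFC 3550-style as a smoothed estimate of inter-arrival variation. A sketch, with the struct name as an illustrative assumption:

```rust
// RFC 3550 (6.4.1) style jitter: J += (|D| - J) / 16, where D is the
// change in arrival spacing between consecutive packets.

struct JitterEstimator {
    last_delta_ms: Option<f64>,
    jitter_ms: f64,
}

impl JitterEstimator {
    fn new() -> Self {
        Self { last_delta_ms: None, jitter_ms: 0.0 }
    }

    /// Feed the arrival spacing (ms) of each packet as it comes in.
    fn update(&mut self, delta_ms: f64) {
        if let Some(prev) = self.last_delta_ms {
            let d = (delta_ms - prev).abs();
            self.jitter_ms += (d - self.jitter_ms) / 16.0;
        }
        self.last_delta_ms = Some(delta_ms);
    }
}
```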
|
||||||
|
|
||||||
|
### Implementation
|
||||||
|
|
||||||
|
**Binary:** `wzp-analyzer` (new crate or subcommand of `wzp-client`)
|
||||||
|
|
||||||
|
```
|
||||||
|
wzp-analyzer 193.180.213.68:4433 --room podcast
|
||||||
|
wzp-analyzer 193.180.213.68:4433 --room podcast --record capture.wzp
|
||||||
|
wzp-analyzer --replay capture.wzp --report report.html
|
||||||
|
```
|
||||||
|
|
||||||
|
**Dependencies:**
|
||||||
|
- Existing: `wzp-transport`, `wzp-proto`, `wzp-codec`, `wzp-crypto`
|
||||||
|
- New: `ratatui` for TUI display (optional)
|
||||||
|
|
||||||
|
### Phases
|
||||||
|
|
||||||
|
| Phase | Scope | Effort |
|
||||||
|
|-------|-------|--------|
|
||||||
|
| 1 | Header-only analysis: join room, log packet metadata, show per-participant stats (TUI) | 2 days |
|
||||||
|
| 2 | Audio decode: decode Opus/Codec2 payloads (unencrypted path), show waveform + RMS | 1-2 days |
|
||||||
|
| 3 | Capture/replay: save packets to file, replay offline with full analysis | 1 day |
|
||||||
|
| 4 | HTML report: post-session quality report with charts | 2 days |
|
||||||
|
| 5 | Encrypted payload support: accept session keys, decrypt ChaCha20 | 1 day |
|
||||||
|
|
||||||
|
### Non-Goals (v1)
|
||||||
|
|
||||||
|
- Active probing (sending test patterns)
|
||||||
|
- Modifying packets in transit
|
||||||
|
- Automated quality scoring (MOS estimation)
|
||||||
|
- Video support
|
||||||
170
docs/PRD-relay-federation.md
Normal file
@@ -0,0 +1,170 @@
|
|||||||
|
# PRD: Relay Federation (Multi-Relay Mesh)
|
||||||
|
|
||||||
|
## Problem
|
||||||
|
|
||||||
|
Currently all participants in a call must connect to the same relay. This creates:
|
||||||
|
- **Single point of failure** — if the relay goes down, the entire call drops
|
||||||
|
- **Geographic latency** — users far from the relay get high RTT
|
||||||
|
- **Capacity limits** — one relay handles all traffic
|
||||||
|
|
||||||
|
Users should be able to connect to their nearest/preferred relay and still talk to users on other relays, as long as the relays are federated.
|
||||||
|
|
||||||
|
## Prerequisite: Fix Relay Identity Persistence
|
||||||
|
|
||||||
|
### Bug: TLS certificate regenerates on every restart
|
||||||
|
|
||||||
|
**Root cause:** `wzp-transport/src/config.rs:17` calls `rcgen::generate_simple_self_signed()` which creates a new keypair every time. The relay's Ed25519 identity seed IS persisted to `~/.wzp/relay-identity`, but the TLS certificate is not derived from it.
|
||||||
|
|
||||||
|
**Impact:** Clients see a different server fingerprint after every relay restart, triggering the "Server Key Changed" warning. This also breaks federation since relays identify each other by certificate fingerprint.
|
||||||
|
|
||||||
|
**Fix:** Derive the TLS certificate from the persisted relay seed:
|
||||||
|
1. Add `server_config_from_seed(seed: &[u8; 32])` to `wzp-transport`
|
||||||
|
2. Use the seed to create a deterministic keypair (e.g., derive an ECDSA key via HKDF from the Ed25519 seed)
|
||||||
|
3. Generate a self-signed cert with that keypair — same seed = same cert = same fingerprint
|
||||||
|
4. The relay passes its loaded seed to `server_config_from_seed()` instead of `server_config()`
**Effort:** 0.5 day

## Federation Design

### Core Concept

Two or more relays form a **federation mesh**. Each relay is an independent SFU. When relays are configured to trust each other, they bridge rooms with matching names — participants on relay A in room "podcast" hear participants on relay B in room "podcast" as if everyone were on the same relay.

### Configuration

Each relay reads a YAML config file (e.g., `~/.wzp/relay.yaml` or `--config relay.yaml`):
```yaml
# Relay identity (auto-generated if missing)
listen: 0.0.0.0:4433

# Federation peers — other relays we trust and bridge rooms with
# Both sides must configure each other for federation to work
peers:
  - url: "193.180.213.68:4433"
    fingerprint: "a5d6:e3c6:5ae7:185c:4eb1:af89:daed:4a43"
    label: "Pangolin EU"

  - url: "10.0.0.5:4433"
    fingerprint: "7f2a:b391:0c44:..."
    label: "Office LAN"
```
**Key rules:**

- Both relays must configure each other — **mutual trust** required
- A relay that receives a connection from an unknown peer logs: `"Relay a5d6:e3c6:... (193.180.213.68) wants to federate. To accept, add to peers config: url: 193.180.213.68:4433, fingerprint: a5d6:e3c6:..."`
- Fingerprints are verified via the TLS certificate (requires the identity fix above)

### Protocol

#### Peer Connection

1. On startup, each relay attempts QUIC connections to all configured peers
2. The connection uses SNI `"_federation"` (reserved room name prefix) to distinguish from client connections
3. After QUIC handshake, verify the peer's certificate fingerprint matches the configured fingerprint
4. If fingerprint mismatch → reject, log warning
5. If peer connects but isn't in our config → log the helpful "add to config" message, reject
#### Room Bridging

Once two relays are connected:

1. **Room discovery**: When a local participant joins room "T", the relay sends a `FederationRoomJoin { room: "T" }` signal to all connected peers
2. **Room leave**: When the last local participant leaves room "T", send `FederationRoomLeave { room: "T" }`
3. **Media forwarding**: For each room that exists on both relays:
   - Relay A forwards all media packets from its local participants to relay B
   - Relay B forwards all media packets from its local participants to relay A
   - Each relay then fans out received federated media to its local participants (same as local SFU forwarding)
4. **Participant presence**: `RoomUpdate` signals are merged — local participants + federated participants from all peers

```
Relay A (2 local users)            Relay B (1 local user)
┌─────────────────────┐          ┌─────────────────────┐
│ Room "T"            │          │ Room "T"            │
│ Alice (local) ──────┼──media──►│ Charlie (local)     │
│ Bob (local) ────────┼──media──►│                     │
│                     │◄──media──┼── Charlie           │
│ Charlie (federated) │          │ Alice (federated)   │
│                     │          │ Bob (federated)     │
└─────────────────────┘          └─────────────────────┘
```
#### Signal Messages (new)

```rust
enum FederationSignal {
    /// A room exists on this relay with active participants
    RoomJoin { room: String, participants: Vec<ParticipantInfo> },
    /// Room is empty on this relay
    RoomLeave { room: String },
    /// Participant update for a federated room
    ParticipantUpdate { room: String, participants: Vec<ParticipantInfo> },
}
```
#### Media Forwarding

Federated media is forwarded as raw QUIC datagrams — the relay doesn't decode/re-encode. Each packet is prefixed with a room identifier so the receiving relay knows which room to fan it out to:

```
[room_hash: 8 bytes][original_media_packet]
```

The 8-byte room hash is computed once when the federation room bridge is established.
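The framing above is simple enough to sketch directly. The function names are illustrative, not the relay's actual API; the layout matches the diagram (8-byte prefix, then the untouched media packet):

```rust
/// Prefix a media packet with the 8-byte room hash for the federation link.
fn frame_federated(room_hash: [u8; 8], media: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(8 + media.len());
    out.extend_from_slice(&room_hash);
    out.extend_from_slice(media);
    out
}

/// Split a received federation datagram back into (room_hash, media).
/// Returns None for datagrams too short to carry the prefix.
fn parse_federated(datagram: &[u8]) -> Option<([u8; 8], &[u8])> {
    if datagram.len() < 8 {
        return None;
    }
    let (hash, media) = datagram.split_at(8);
    Some((hash.try_into().unwrap(), media))
}
```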
### What Relays DON'T Do

- **No transcoding** — media passes through as-is. If Alice sends Opus 64k, Charlie receives Opus 64k
- **No re-encryption** — packets are already encrypted end-to-end between participants. Relays just forward opaque bytes
- **No central coordinator** — each relay independently connects to its configured peers. No master/slave, no consensus protocol
- **No automatic peer discovery** — peers must be explicitly configured in YAML

### Failure Handling

- If a peer relay goes down, the federation link drops. Local rooms continue to work. Federated participants disappear from presence.
- Reconnection: retry starting at 30 seconds, with exponential backoff capped at 5 minutes
- If a peer relay restarts with a new identity (bug not fixed), the fingerprint check fails and federation is rejected with a clear error log
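The reconnect schedule can be written as a pure function of the attempt count. This is a sketch under the stated parameters (30 s base, doubling, 5-minute cap); the function name is hypothetical:

```rust
use std::time::Duration;

/// Delay before reconnect attempt `n` (0-based): 30 s, 60 s, 120 s, …
/// capped at 5 minutes. Illustrative sketch, not the relay's actual code.
fn reconnect_delay(attempt: u32) -> Duration {
    let secs = 30u64.saturating_mul(1u64 << attempt.min(5));
    Duration::from_secs(secs.min(300))
}
```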
## Implementation Plan

### Phase 0: Fix Relay Identity (prerequisite)
- Derive TLS cert from persisted seed
- Same seed → same cert → same fingerprint across restarts

### Phase 1: YAML Config + Peer Connection
- Add `--config relay.yaml` CLI flag
- Parse peers config
- On startup, connect to all configured peers via QUIC
- Verify certificate fingerprints
- Log helpful message for unconfigured peers
- Reconnect on disconnect

### Phase 2: Room Bridging
- Track which rooms exist on each peer
- Forward media for shared rooms
- Merge participant presence across peers
- Handle room join/leave signals

### Phase 3: Resilience
- Graceful handling of peer disconnect/reconnect
- Don't duplicate packets if a participant is reachable via multiple paths
- Rate limiting on federation links (prevent amplification)
- Metrics: federated rooms, packets forwarded, peer latency

## Effort Estimates

| Phase | Scope | Effort |
|-------|-------|--------|
| 0 | Fix relay TLS identity from seed | 0.5 day |
| 1 | YAML config + peer QUIC connections | 2 days |
| 2 | Room bridging + media forwarding + presence merge | 3-4 days |
| 3 | Resilience + metrics | 2 days |

## Non-Goals (v1)

- Automatic peer discovery (mDNS, DHT, etc.)
- Cascading federation (relay A ↔ B ↔ C where A doesn't know C)
- Load balancing across relays
- Encryption between relays (QUIC provides transport encryption; e2e encryption between participants is orthogonal)
- Different rooms on different relays (all federated rooms are bridged by name)
508
docs/USER_GUIDE.md
Normal file
@@ -0,0 +1,508 @@
# WarzonePhone User Guide

This guide covers all WarzonePhone client applications: Desktop (Tauri), Android, CLI, and Web.

## Desktop Client (Tauri)

The desktop client is a Tauri application with a native Rust audio engine and a web-based UI. It runs on macOS, Windows, and Linux.

### Connect Screen

When you launch the desktop client, you see the connect screen with:

- **Relay selector** -- click the relay button to open the Manage Relays dialog. Shows relay name, address, connection status (verified/new/changed/offline), and RTT latency
- **Room** -- enter a room name. Clients in the same room hear each other. Room names are hashed before being sent to the relay for privacy
- **Alias** -- your display name shown to other participants
- **OS Echo Cancel** -- checkbox to enable macOS VoiceProcessingIO (Apple's FaceTime-grade AEC). Strongly recommended when using speakers
- **Connect button** -- connects to the selected relay and joins the room
- **Identity info** -- your identicon and fingerprint are shown at the bottom. Click to copy

Recent rooms are displayed below the form for quick reconnection. Click any recent room to select it and its associated relay.
### In-Call Screen

Once connected, the in-call screen shows:

- **Room name** and **call timer** at the top
- **Status indicator** -- green when connected, yellow when reconnecting
- **Audio level meter** -- real-time visualization of outgoing audio
- **Participant list** -- identicon, alias, and fingerprint for each participant. Your own entry is highlighted with a badge
- **Controls** -- Mic toggle, Hang Up, Speaker toggle
- **Stats bar** -- TX and RX frame rates
### Settings Panel

Open with the gear icon or **Cmd+,** (Ctrl+, on Windows/Linux). Contains:

#### Connection

- **Default Room** -- room name used on next connect
- **Alias** -- display name

#### Audio

- **Quality slider** -- 5 levels:

| Position | Profile | Description |
|----------|---------|-------------|
| 0 | Auto | Adaptive quality based on network conditions |
| 1 | Opus 24k | Good conditions (28.8 kbps with FEC) |
| 2 | Opus 6k | Degraded conditions (9.0 kbps with FEC) |
| 3 | Codec2 3.2k | Poor conditions (4.8 kbps with FEC) |
| 4 | Codec2 1.2k | Catastrophic conditions (2.4 kbps with FEC) |
- **OS Echo Cancellation** -- macOS VoiceProcessingIO toggle
- **Automatic Gain Control** -- normalize mic volume

#### Identity

- **Fingerprint** -- your public identity fingerprint
- **Identity file** -- stored at `~/.wzp/identity`

#### Recent Rooms

- History of recently joined rooms with relay association
- Clear History button

### Manage Relays Dialog

Open by clicking the relay selector button on the connect screen:

- **Relay list** -- each entry shows name, address, identicon (from server fingerprint), lock status, and RTT
- **Select** -- click a relay to make it the default
- **Remove** -- click the X button to delete a relay
- **Add Relay** -- enter name and host:port to add a new relay
- **Ping** -- relays are automatically pinged when the dialog opens. RTT and server fingerprint are updated
### Key Change Warning Dialog

If a relay's TLS fingerprint has changed since your last connection, a warning dialog appears:

- Shows the previously known fingerprint and the new fingerprint
- **Accept New Key** -- trust the new fingerprint and proceed
- **Cancel** -- abort the connection

This is the TOFU (Trust on First Use) model. Fingerprint changes typically mean the relay was restarted with a new identity. However, they could also indicate a man-in-the-middle attack.
### Keyboard Shortcuts

| Shortcut | Action | Context |
|----------|--------|---------|
| **m** | Toggle microphone | In-call |
| **s** | Toggle speaker | In-call |
| **q** | Hang up | In-call |
| **Cmd+,** (Ctrl+,) | Open/close settings | Any |
| **Escape** | Close dialog/settings | Any |
| **Enter** | Connect | Connect screen (when room/alias field is focused) |
### Audio Engine

The desktop audio engine uses:

- **CPAL** for audio I/O (CoreAudio on macOS, WASAPI on Windows, ALSA on Linux)
- **VoiceProcessingIO** on macOS for OS-level echo cancellation (opt-in via checkbox)
- **Lock-free SPSC ring buffers** between audio threads and network threads
- **Direct playout** -- no jitter buffer on the client (the relay buffers instead)
- Audio callbacks deliver 512 f32 samples at 48 kHz on macOS (accumulated to 960-sample frames for codec)
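The 512-to-960 accumulation in the last bullet can be sketched as a small buffer that carries leftover samples between callbacks. `FrameAccumulator` is an illustrative name, not the engine's actual type:

```rust
/// Accumulate variable-size audio callbacks (512 samples on macOS)
/// into fixed 960-sample (20 ms @ 48 kHz) codec frames, carrying any
/// remainder over to the next callback.
struct FrameAccumulator {
    pending: Vec<f32>,
}

impl FrameAccumulator {
    fn new() -> Self {
        Self { pending: Vec::new() }
    }

    /// Feed one callback buffer; returns zero or more complete frames.
    fn push(&mut self, samples: &[f32]) -> Vec<[f32; 960]> {
        self.pending.extend_from_slice(samples);
        let mut frames = Vec::new();
        while self.pending.len() >= 960 {
            let mut frame = [0.0f32; 960];
            frame.copy_from_slice(&self.pending[..960]);
            self.pending.drain(..960);
            frames.push(frame);
        }
        frames
    }
}
```

With 512-sample callbacks, every other callback emits one frame (512 + 512 = 1024 ≥ 960, leaving 64 samples pending).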
#### Audio Quality Notes

- Always use **Release builds** for real-time audio. Debug builds are too slow for wzp-codec, nnnoiseless, audiopus, and raptorq
- VoiceProcessingIO is strongly recommended on macOS. Software AEC does not work well with the round-trip latency (~35-45ms)
- The quality slider only affects the **encode** side. Decoding always accepts all codecs

### Auto-Reconnect

If the connection drops, the client automatically attempts to reconnect with exponential backoff (1s, 2s, 4s, 8s, capped at 10s). After 5 failed attempts, the client returns to the connect screen. The status dot shows yellow during reconnection.
## Android Client

The Android client is built with Kotlin and Jetpack Compose, using JNI to call the Rust audio engine.

### Call Screen

The main call screen shows:

- **Server selector** -- tap to choose from configured servers
- **Room name** -- enter the room to join
- **Connect/Disconnect** button
- **Participant list** with identicons and aliases
- **Audio level visualization**
- **Mute/Unmute** button
### Settings Screen

The settings screen is organized into sections:

#### Identity

- **Display Name** -- your alias shown to other participants
- **Fingerprint** -- displayed with an identicon. Tap to copy
- **Copy Key** -- copy the 64-character hex seed to clipboard for backup
- **Restore Key** -- paste a previously backed-up hex seed to restore your identity

#### Audio Defaults

- **Voice Volume** -- playout gain slider (-20 dB to +20 dB)
- **Mic Gain** -- capture gain slider (-20 dB to +20 dB)
- **Echo Cancellation (AEC)** -- toggle Android's built-in AEC. Disable if audio sounds distorted
- **Quality slider** -- 8 levels from best to lowest:

| Position | Profile | Bitrate | Color |
|----------|---------|---------|-------|
| 0 | Studio 64k | 70.4 kbps | Green |
| 1 | Studio 48k | 52.8 kbps | Green |
| 2 | Studio 32k | 35.2 kbps | Green |
| 3 | Auto | Adaptive | Yellow-green |
| 4 | Opus 24k | 28.8 kbps | Yellow-green |
| 5 | Opus 6k | 9.0 kbps | Yellow |
| 6 | Codec2 3.2k | 4.8 kbps | Orange |
| 7 | Codec2 1.2k | 2.4 kbps | Red |

Note: "Decode always accepts all codecs" -- the quality setting only affects encoding.
#### Servers

- **Server chips** -- tap to select, X to remove (built-in servers cannot be removed)
- **Add Server** -- enter host, port (default 4433), and optional label
- **Force Ping** -- servers are pinged on dialog open to measure RTT

#### Network

- **Prefer IPv6** -- toggle to prefer IPv6 connections when available

#### Room

- **Default Room** -- the room name pre-filled on the call screen

### Identity Backup and Restore

Your identity is a 32-byte seed stored as a 64-character hex string. To back up:

1. Go to Settings > Identity
2. Tap **Copy Key**
3. Store the hex string securely

To restore on a new device:

1. Go to Settings > Identity
2. Tap **Restore Key**
3. Paste the 64-character hex string
4. Tap **Restore** (key is staged)
5. Tap **Save** to apply

The same seed produces the same fingerprint on any device or platform.
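The 32-byte-seed ↔ 64-hex-character relationship is a straightforward encoding; a minimal sketch (function names are illustrative, not the app's actual API):

```rust
/// Encode a 32-byte identity seed as the 64-character hex string used for backup.
fn seed_to_hex(seed: &[u8; 32]) -> String {
    seed.iter().map(|b| format!("{b:02x}")).collect()
}

/// Decode a backed-up hex string; rejects anything that is not exactly
/// 64 hex characters (2 characters per byte).
fn hex_to_seed(hex: &str) -> Option<[u8; 32]> {
    if hex.len() != 64 {
        return None;
    }
    let mut seed = [0u8; 32];
    for (i, chunk) in hex.as_bytes().chunks(2).enumerate() {
        let s = std::str::from_utf8(chunk).ok()?;
        seed[i] = u8::from_str_radix(s, 16).ok()?;
    }
    Some(seed)
}
```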
## CLI Client (wzp-client)

The CLI client is a command-line tool for testing, recording, and live audio.

### Usage

```
wzp-client [options] [relay-addr]
```

Default relay address: `127.0.0.1:4433`

### Flags Reference

| Flag | Description |
|------|-------------|
| `--live` | Live mic/speaker mode. Requires `--features audio` at build time |
| `--send-tone <secs>` | Send a 440 Hz test tone for N seconds |
| `--send-file <file>` | Send a raw PCM file (48 kHz mono s16le) |
| `--record <file.raw>` | Record received audio to raw PCM file |
| `--echo-test <secs>` | Run automated echo quality test for N seconds. Produces a windowed analysis with loss%, SNR, correlation |
| `--drift-test <secs>` | Run automated clock-drift measurement for N seconds |
| `--sweep` | Run jitter buffer parameter sweep (local, no network). Tests different buffer configurations |
| `--seed <hex>` | Identity seed as 64 hex characters. Compatible with featherChat |
| `--mnemonic <words...>` | Identity seed as BIP39 mnemonic (24 words). All remaining non-flag words are consumed |
| `--room <name>` | Room name. Hashed before sending for privacy |
| `--token <token>` | featherChat bearer token for relay authentication |
| `--metrics-file <path>` | Write JSONL telemetry to file (1 line/sec) |
| `--help`, `-h` | Print help and exit |
### Common Usage Patterns

#### Connectivity Test (Silence)

```bash
# Send 250 silence frames (5 seconds) and exit
wzp-client 127.0.0.1:4433
```

#### Live Audio Call

```bash
# Terminal 1
wzp-relay

# Terminal 2: Alice
wzp-client --live --room myroom 127.0.0.1:4433

# Terminal 3: Bob
wzp-client --live --room myroom 127.0.0.1:4433
```

Both capture from mic and play received audio. Press Ctrl+C to stop.

#### Send Test Tone and Record

```bash
# Terminal 1
wzp-relay

# Terminal 2: Send 10 seconds of 440 Hz tone
wzp-client --send-tone 10 127.0.0.1:4433

# Terminal 3: Record what is received
wzp-client --record call.raw 127.0.0.1:4433
```

Play the recording:

```bash
ffplay -f s16le -ar 48000 -ac 1 call.raw
```
#### Send Audio File

```bash
# Convert to raw PCM first
ffmpeg -i song.mp3 -f s16le -ar 48000 -ac 1 song.raw

# Send through relay
wzp-client --send-file song.raw 127.0.0.1:4433
```

#### Echo Quality Test

```bash
wzp-relay &
wzp-client --echo-test 30 127.0.0.1:4433
```

Produces a windowed analysis showing loss percentage, SNR, correlation, and quality degradation trends.

#### Clock Drift Test

```bash
wzp-relay &
wzp-client --drift-test 60 127.0.0.1:4433
```

Measures clock drift between the send and receive paths over the specified duration.

#### Jitter Buffer Sweep

```bash
# Runs locally, no network needed
wzp-client --sweep
```

Tests different jitter buffer configurations and prints results.

#### With Identity and Auth

```bash
# Using hex seed
wzp-client --seed 0123456789abcdef...64chars --room secure-room --token my-bearer-token relay.example.com:4433

# Using BIP39 mnemonic
wzp-client --mnemonic abandon abandon abandon ... zoo --room secure-room relay.example.com:4433
```
#### With JSONL Telemetry

```bash
wzp-client --live --metrics-file /tmp/call.jsonl relay.example.com:4433
```

Writes one JSON object per second:

```json
{
  "ts": "2026-04-07T12:00:00Z",
  "buffer_depth": 45,
  "underruns": 0,
  "overruns": 0,
  "loss_pct": 1.2,
  "rtt_ms": 34,
  "jitter_ms": 8,
  "frames_sent": 50,
  "frames_received": 49,
  "quality_profile": "GOOD"
}
```
### Audio File Format

All raw PCM files use:

| Property | Value |
|----------|-------|
| Sample rate | 48 kHz |
| Channels | 1 (mono) |
| Sample format | signed 16-bit little-endian (s16le) |

Conversion commands:

```bash
# WAV to raw PCM
ffmpeg -i input.wav -f s16le -ar 48000 -ac 1 output.raw

# MP3 to raw PCM
ffmpeg -i input.mp3 -f s16le -ar 48000 -ac 1 output.raw

# Raw PCM to WAV
ffmpeg -f s16le -ar 48000 -ac 1 -i input.raw output.wav

# Play raw PCM
ffplay -f s16le -ar 48000 -ac 1 file.raw
```
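If you generate test audio programmatically rather than with ffmpeg, the s16le layout is easy to produce by hand. A minimal sketch (the function name is illustrative):

```rust
/// Convert mono f32 samples (range -1.0..=1.0) to the raw s16le byte
/// layout the CLI expects: each sample scaled to i16 and written
/// little-endian, two bytes per sample.
fn f32_to_s16le(samples: &[f32]) -> Vec<u8> {
    let mut out = Vec::with_capacity(samples.len() * 2);
    for &s in samples {
        let v = (s.clamp(-1.0, 1.0) * i16::MAX as f32) as i16;
        out.extend_from_slice(&v.to_le_bytes());
    }
    out
}
```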
## Web Client (Browser)

The web client runs in a browser via the wzp-web bridge server.

### Setup

```bash
# Start relay
wzp-relay

# Start web bridge
wzp-web --port 8080 --relay 127.0.0.1:4433

# For remote access (requires TLS for mic)
wzp-web --port 8443 --relay 127.0.0.1:4433 --tls
```

Open `http://localhost:8080/room-name` (or `https://...` with TLS).

### Features

- **Open mic** (default) and **push-to-talk** modes
- PTT via on-screen button, mouse hold, or spacebar
- Audio level meter
- Auto-reconnection on disconnect

### Audio Processing

The web client uses AudioWorklet (preferred) with a ScriptProcessorNode fallback:

- **Capture**: Accumulates Float32 samples into 960-sample (20ms) Int16 frames
- **Playback**: Ring buffer capped at 200ms (9600 samples at 48 kHz)
## Identity System

### Overview

Your identity is a 32-byte cryptographic seed that derives:

- **Ed25519 signing key** -- authenticates handshake messages
- **X25519 key agreement key** -- derives shared session encryption keys
- **Fingerprint** -- SHA-256 of the public key, truncated to 16 bytes, displayed as `xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx`
- **Identicon** -- deterministic visual avatar generated from the fingerprint
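The fingerprint display format described above (16 truncated hash bytes rendered as eight colon-separated groups of four hex digits) can be sketched directly; the function name is illustrative:

```rust
/// Format a 16-byte truncated SHA-256 fingerprint as
/// xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx — eight groups of two bytes.
fn format_fingerprint(fp: &[u8; 16]) -> String {
    fp.chunks(2)
        .map(|pair| format!("{:02x}{:02x}", pair[0], pair[1]))
        .collect::<Vec<_>>()
        .join(":")
}
```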
### Seed Sources

| Source | Description |
|--------|-------------|
| Auto-generated | Created on first run, stored in `~/.wzp/identity` (desktop/CLI) or app storage (Android) |
| `--seed <hex>` | 64-character hex string (CLI) |
| `--mnemonic <words>` | 24-word BIP39 mnemonic (CLI) |
| Copy Key / Restore Key | Hex backup/restore (Android settings) |

### BIP39 Mnemonic Backup

The 32-byte seed can be represented as a 24-word BIP39 mnemonic for human-readable backup. The same mnemonic produces the same identity on any platform or device.

### featherChat Compatibility

The identity derivation uses the same HKDF scheme as featherChat (Warzone messenger). The same seed produces the same fingerprint in both systems, allowing a unified identity across messaging and calling.

### Trust on First Use (TOFU)

Clients remember the fingerprints of relays and peers they connect to. On subsequent connections, if a fingerprint changes, the client warns the user. This protects against man-in-the-middle attacks but requires manual verification on first contact.
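The TOFU decision boils down to a three-way check against a local trust store. A minimal sketch — the type and function names are illustrative, not the client's actual code:

```rust
use std::collections::HashMap;

/// Outcome of a TOFU fingerprint check against the local trust store.
#[derive(Debug, PartialEq)]
enum Tofu {
    FirstContact,              // never seen: remember and proceed
    Verified,                  // matches what we remembered
    Changed { known: String }, // mismatch: warn the user before proceeding
}

/// Compare the fingerprint a relay presented against what we remembered.
fn check_tofu(store: &mut HashMap<String, String>, relay: &str, presented: &str) -> Tofu {
    match store.get(relay) {
        None => {
            store.insert(relay.to_string(), presented.to_string());
            Tofu::FirstContact
        }
        Some(known) if known == presented => Tofu::Verified,
        Some(known) => Tofu::Changed { known: known.clone() },
    }
}
```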
## Quality Profiles Explained

### When to Use Each Profile

| Profile | Total Bandwidth | Best For | Trade-offs |
|---------|----------------|----------|------------|
| **Studio 64k** | 70.4 kbps | LAN calls, music, podcasting | Highest quality, needs good network |
| **Studio 48k** | 52.8 kbps | Good WiFi, wired connections | Near-studio quality |
| **Studio 32k** | 35.2 kbps | Reliable WiFi, LTE | Very good quality with lower bandwidth |
| **Auto** | Adaptive | Most users | Automatically switches based on network conditions |
| **Opus 24k** | 28.8 kbps | General use, moderate networks | Good speech quality, reasonable bandwidth |
| **Opus 6k** | 9.0 kbps | 3G networks, congested WiFi | Intelligible speech, some artifacts |
| **Codec2 3.2k** | 4.8 kbps | Poor connections | Robotic but intelligible, narrowband |
| **Codec2 1.2k** | 2.4 kbps | Satellite links, extreme loss | Minimal intelligibility, last resort |
### Auto Mode

Auto mode starts at the **Good (Opus 24k)** profile and adapts based on observed network quality:

- **Downgrade** -- 3 consecutive bad quality reports (2 on cellular) trigger a step down
- **Upgrade** -- 10 consecutive good quality reports trigger a step up (one tier at a time)
- **Network handoff** -- switching from WiFi to cellular triggers a preemptive one-tier downgrade plus a 10-second FEC boost

Auto mode uses three tiers (Good, Degraded, Catastrophic). It does not use the Studio profiles, which must be selected manually.
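The hysteresis described above can be sketched as a small state machine over the three tiers. This is an illustrative model using the thresholds from the guide (the cellular threshold of 2 and the handoff FEC boost are omitted for brevity), not the client's actual code:

```rust
/// Auto-mode tiers, ordered best → worst.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Tier {
    Good,
    Degraded,
    Catastrophic,
}

/// Hysteresis: 3 consecutive bad reports step down one tier,
/// 10 consecutive good reports step up one tier.
struct AutoMode {
    tier: Tier,
    bad: u32,
    good: u32,
}

impl AutoMode {
    fn new() -> Self {
        Self { tier: Tier::Good, bad: 0, good: 0 }
    }

    fn report(&mut self, is_good: bool) {
        if is_good {
            self.good += 1;
            self.bad = 0;
            if self.good >= 10 {
                self.good = 0;
                self.tier = match self.tier {
                    Tier::Catastrophic => Tier::Degraded, // one tier at a time
                    _ => Tier::Good,
                };
            }
        } else {
            self.bad += 1;
            self.good = 0;
            if self.bad >= 3 {
                self.bad = 0;
                self.tier = match self.tier {
                    Tier::Good => Tier::Degraded,
                    _ => Tier::Catastrophic,
                };
            }
        }
    }
}
```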
### Manual Override

When you select a specific profile (not Auto), adaptive switching is disabled. The encoder stays at the selected profile regardless of network conditions. This is useful when you know your network quality and want consistent encoding, or when you want to force a specific bitrate.

Note: The decoder always accepts all codecs. A manual quality selection only affects what you send, not what you receive.

## Direct 1:1 Calling (Desktop + Android)

In addition to room-mode group calls, you can place direct calls to a specific peer by fingerprint. Direct calls bypass room state entirely — the relay is used purely as a signaling gateway and for media relay. There is no need for the callee to join a room beforehand; they just need to be registered with the same signal hub.
### UI elements in the direct-call panel

- **Place call field** — paste a fingerprint (the long hex string you see under your own identity) and click Call. The callee sees a ringing UI.
- **Recent contacts row** — a horizontal strip of chips showing your most recently called/receiving peers. Click a chip to re-dial. Aliases are shown if the peer has one, otherwise a short fingerprint prefix.
- **Call history list** — every direct call you've placed, received, or missed, with direction indicator (↗ Outgoing, ↙ Incoming, ✗ Missed), the peer's alias (if known) or fingerprint prefix, and a timestamp. Click an entry to re-dial.
- **Deregister button** — drops your signal-hub registration without quitting the app. Useful when switching identities (e.g. testing with two accounts on one machine) or when you want to explicitly appear offline to peers.
- **Clear history button** — wipes the call history store. Does not affect current calls.

### Live updates

The call history updates in real time across all views via Tauri events (`history-changed`). Placing, answering, or missing a call immediately refreshes the history list and the recent contacts row — no manual refresh needed.

### Default room

On first launch, the room name in the room-mode panel defaults to `general` (changed from the prior `android` default so the desktop and Android clients don't silently talk past each other). You can still change it to any room name, and the last-used room is remembered across launches.

### Random alias

New installations derive a human-friendly alias from your identity seed — something like `silent-forest-41` or `bold-river-07`. It's deterministic, so reinstalling without changing your seed gives you the same alias. The alias is shown alongside your fingerprint in the header and is what peers see in their call history when they receive your call.

You can override the alias in Settings → Identity if you want a specific name.
|
## Windows AEC Variants

The Windows desktop build ships in two variants of echo cancellation, depending on which audio backend you want to exercise. Both are builds of the same desktop binary — only the internal audio backend differs.

| Build | File | Capture backend | AEC | When to use |
|---|---|---|---|---|
| **noAEC baseline** | `wzp-desktop-noAEC.exe` | CPAL (WASAPI shared mode) | None | Headphone-only use, or for A/B comparison against the AEC build |
| **Communications AEC** | `wzp-desktop.exe` | Direct WASAPI with `AudioCategory_Communications` | **Yes** — Windows routes the capture stream through the driver's communications APO chain (AEC + noise suppression + automatic gain control) | Any speaker-mode call, laptop built-in speakers, anywhere echo is audible |

**Quality caveat**: the communications AEC operates at the OS level, and its algorithm depends on the audio driver's installed APO chain. On modern consumer laptops with Intel Smart Sound, Dolby, recent Realtek, or Windows 11 Voice Clarity, the quality is excellent (effectively matching what Teams/Zoom deliver). On generic class-compliant USB microphones or older drivers, the communications APO may not be present at all — in that case this build behaves identically to the noAEC baseline.
If you hear echo on the AEC build, try these in order before escalating:

1. **Check which capture device is selected as "Default Device - Communications"** in Windows Sound Settings → Recording tab. Right-click any device to set it. The AEC build opens the device marked as `eCommunications`, not `eConsole`, so changing the default communications device changes what we capture from.
2. **Verify the driver exposes a communications APO.** Sound Settings → Recording → your mic → Properties → Advanced → look for an "Enhancements" or "Signal Enhancements" tab. If it's absent, the driver has no APOs and the AEC build effectively has no AEC.
3. **Try the classic Voice Capture DSP build** when it ships (tracked as task #26). That uses Microsoft's bundled software AEC (`CLSID_CWMAudioAEC`), which works on every Windows machine regardless of driver.
### Installing the Windows builds

1. Windows 10: install the [WebView2 Runtime Evergreen Bootstrapper](https://developer.microsoft.com/en-us/microsoft-edge/webview2/) first. Windows 11 has it pre-installed.
2. Copy `wzp-desktop.exe` (or `wzp-desktop-noAEC.exe`) to any directory and double-click. No installer needed.
3. First launch creates the config + identity store at `%APPDATA%\com.wzp.phone\`.

**New file:** `docs/android/fix-audio-ring-desync.md` (+394 lines)

# Fix: AudioRing SPSC Buffer Cursor Desync

## Problem

A critical bug causes 10-16 seconds of bidirectional audio silence mid-call (~25-30s in). Both participants go silent at the exact same moment. The QUIC transport, relay, Opus codec, and FEC are all healthy — the bug is in the lock-free ring buffer that transfers decoded PCM from the Rust recv task to the Kotlin AudioTrack playout thread.

**Root cause:** `AudioRing::write()` modifies `read_pos` from the producer thread during overflow handling (lines 68-72 of `audio_ring.rs`). This violates the SPSC invariant — only the consumer should own `read_pos`. When both threads write to `read_pos`, a race corrupts the cursor state, causing the reader to see an empty or stale buffer for 12-16 seconds.

**Full forensics:** `debug/INCIDENT-2026-04-06-playout-ring-desync.md`

---

## Solution: Reader-Detects-Lap Architecture

The writer NEVER touches `read_pos`. On overflow, the writer simply overwrites old buffer data and advances `write_pos`. The reader detects it was lapped and self-corrects by snapping its own `read_pos` forward.
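The single-owner cursor discipline is the heart of the fix. As a minimal standalone sketch (simplified in two ways relative to `AudioRing`: per-slot atomic storage instead of a raw `i16` buffer, and backpressure on full instead of the overwrite-and-lap policy), two threads can hammer such a ring and ordering holds because neither thread ever stores the other's cursor:

```rust
use std::sync::atomic::{AtomicI32, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

const CAP: usize = 1024; // power of 2
const MASK: usize = CAP - 1;

struct Ring {
    buf: Vec<AtomicI32>,
    write_pos: AtomicUsize, // only the producer stores this
    read_pos: AtomicUsize,  // only the consumer stores this
}

impl Ring {
    fn new() -> Self {
        Ring {
            buf: (0..CAP).map(|_| AtomicI32::new(0)).collect(),
            write_pos: AtomicUsize::new(0),
            read_pos: AtomicUsize::new(0),
        }
    }
    fn push(&self, v: i32) -> bool {
        let w = self.write_pos.load(Ordering::Relaxed);
        let r = self.read_pos.load(Ordering::Acquire);
        if w.wrapping_sub(r) == CAP {
            return false; // full; the producer never touches read_pos
        }
        self.buf[w & MASK].store(v, Ordering::Relaxed);
        self.write_pos.store(w.wrapping_add(1), Ordering::Release);
        true
    }
    fn pop(&self) -> Option<i32> {
        let r = self.read_pos.load(Ordering::Relaxed);
        let w = self.write_pos.load(Ordering::Acquire);
        if w == r {
            return None; // empty
        }
        let v = self.buf[r & MASK].load(Ordering::Relaxed);
        self.read_pos.store(r.wrapping_add(1), Ordering::Release);
        Some(v)
    }
}

fn main() {
    let ring = Arc::new(Ring::new());
    let producer = {
        let ring = Arc::clone(&ring);
        thread::spawn(move || {
            for v in 0..100_000i32 {
                while !ring.push(v) {
                    std::hint::spin_loop();
                }
            }
        })
    };
    // Consumer: values must arrive in order with nothing lost.
    let mut expected = 0i32;
    while expected < 100_000 {
        if let Some(v) = ring.pop() {
            assert_eq!(v, expected);
            expected += 1;
        }
    }
    producer.join().unwrap();
    println!("SPSC invariant held for 100000 items");
}
```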

---

## Implementation Steps

### Step 1: Rewrite `AudioRing`

**File:** `crates/wzp-android/src/audio_ring.rs`

Replace the entire implementation with:

**Constants:**
```rust
/// Ring buffer capacity — must be a power of 2 for bitmask indexing.
/// 16384 samples = 341.3ms at 48kHz mono. Provides 70% more headroom
/// than the previous 9600 (200ms) for surviving Android GC pauses.
const RING_CAPACITY: usize = 16384; // 2^14
const RING_MASK: usize = RING_CAPACITY - 1;
```
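A quick aside on why the power-of-two requirement matters: masking with `RING_CAPACITY - 1` is only equivalent to the modulo operation used for index wrapping when the capacity is a power of two, which is exactly what the `debug_assert!` in the constructor guards.

```rust
// Demonstration (separate from the patch): for a power-of-two capacity,
// `pos & (CAP - 1)` equals `pos % CAP`, including across the wrap boundary.
const RING_CAPACITY: usize = 16384; // 2^14
const RING_MASK: usize = RING_CAPACITY - 1;

fn main() {
    for pos in (RING_CAPACITY - 4)..(RING_CAPACITY + 4) {
        assert_eq!(pos & RING_MASK, pos % RING_CAPACITY);
    }
    println!("mask == modulo for power-of-two capacity");
}
```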

**Struct:**

```rust
pub struct AudioRing {
    buf: Box<[i16; RING_CAPACITY]>,
    write_pos: AtomicUsize,    // monotonically increasing, ONLY written by producer
    read_pos: AtomicUsize,     // monotonically increasing, ONLY written by consumer
    overflow_count: AtomicU64, // incremented by reader when it detects a lap
    underrun_count: AtomicU64, // incremented by reader when ring is empty
}
```

**`write()` — producer. Does NOT touch `read_pos`:**

```rust
pub fn write(&self, samples: &[i16]) -> usize {
    let count = samples.len().min(RING_CAPACITY);
    let w = self.write_pos.load(Ordering::Relaxed);

    for i in 0..count {
        unsafe {
            let ptr = self.buf.as_ptr() as *mut i16;
            *ptr.add((w + i) & RING_MASK) = samples[i];
        }
    }

    self.write_pos.store(w.wrapping_add(count), Ordering::Release);
    count
}
```

**`read()` — consumer. Detects lap, self-corrects:**

```rust
pub fn read(&self, out: &mut [i16]) -> usize {
    let w = self.write_pos.load(Ordering::Acquire);
    let mut r = self.read_pos.load(Ordering::Relaxed);

    let mut avail = w.wrapping_sub(r);

    // Lap detection: writer has overwritten our unread data.
    // Snap read_pos forward to oldest valid data in the buffer.
    // Safe because we (the reader) are the sole owner of read_pos.
    if avail > RING_CAPACITY {
        r = w.wrapping_sub(RING_CAPACITY);
        avail = RING_CAPACITY;
        self.overflow_count.fetch_add(1, Ordering::Relaxed);
    }

    let count = out.len().min(avail);
    if count == 0 {
        if w == r {
            self.underrun_count.fetch_add(1, Ordering::Relaxed);
        }
        return 0;
    }

    for i in 0..count {
        out[i] = unsafe { *self.buf.as_ptr().add((r + i) & RING_MASK) };
    }

    self.read_pos.store(r.wrapping_add(count), Ordering::Release);
    count
}
```

**`available()` — clamped for external callers:**

```rust
pub fn available(&self) -> usize {
    let w = self.write_pos.load(Ordering::Acquire);
    let r = self.read_pos.load(Ordering::Relaxed);
    w.wrapping_sub(r).min(RING_CAPACITY)
}
```

**`free_space()` — keep for API compat:**

```rust
pub fn free_space(&self) -> usize {
    RING_CAPACITY.saturating_sub(self.available())
}
```

**Diagnostic accessors:**

```rust
pub fn overflow_count(&self) -> u64 {
    self.overflow_count.load(Ordering::Relaxed)
}

pub fn underrun_count(&self) -> u64 {
    self.underrun_count.load(Ordering::Relaxed)
}
```

**Constructor:**

```rust
pub fn new() -> Self {
    debug_assert!(RING_CAPACITY.is_power_of_two());
    Self {
        buf: Box::new([0i16; RING_CAPACITY]),
        write_pos: AtomicUsize::new(0),
        read_pos: AtomicUsize::new(0),
        overflow_count: AtomicU64::new(0),
        underrun_count: AtomicU64::new(0),
    }
}
```

**Imports to add:** `use std::sync::atomic::AtomicU64;`

**Safety comment update:**

```rust
// SAFETY: AudioRing is SPSC — one thread writes (producer), one reads (consumer).
// The producer only writes write_pos. The consumer only writes read_pos.
// Neither thread writes the other's cursor. Buffer indices are derived from
// the owning thread's cursor, ensuring no concurrent access to the same index.
```

---

### Step 2: Add counter fields to `CallStats`

**File:** `crates/wzp-android/src/stats.rs`

Add three fields to the `CallStats` struct (after `fec_recovered`):

```rust
/// Playout ring overflow count (reader was lapped by writer).
pub playout_overflows: u64,
/// Playout ring underrun count (reader found empty buffer).
pub playout_underruns: u64,
/// Capture ring overflow count.
pub capture_overflows: u64,
```

These derive `Default` (= 0) automatically via the existing `#[derive(Default)]`.

---

### Step 3: Wire ring diagnostics into engine stats + logging

**File:** `crates/wzp-android/src/engine.rs`

**3a.** In `get_stats()` (~line 181), populate the new fields:

```rust
stats.playout_overflows = self.state.playout_ring.overflow_count();
stats.playout_underruns = self.state.playout_ring.underrun_count();
stats.capture_overflows = self.state.capture_ring.overflow_count();
```

**3b.** In the recv task periodic stats log, add ring health:

```rust
info!(
    frames_decoded,
    fec_recovered,
    recv_errors,
    max_recv_gap_ms,
    playout_avail = state.playout_ring.available(),
    playout_overflows = state.playout_ring.overflow_count(),
    playout_underruns = state.playout_ring.underrun_count(),
    "recv stats"
);
```

**3c.** In the send task periodic stats log, add capture ring health:

```rust
info!(
    seq = s,
    block_id,
    frames_sent,
    frames_dropped,
    send_errors,
    ring_avail = state.capture_ring.available(),
    capture_overflows = state.capture_ring.overflow_count(),
    "send stats"
);
```

---

### Step 4: Parse new stats in Kotlin

**File:** `android/app/src/main/java/com/wzp/engine/CallStats.kt`

Add fields to the data class:

```kotlin
val playoutOverflows: Long = 0,
val playoutUnderruns: Long = 0,
val captureOverflows: Long = 0,
```

Add parsing in `fromJson()`:

```kotlin
playoutOverflows = obj.optLong("playout_overflows", 0),
playoutUnderruns = obj.optLong("playout_underruns", 0),
captureOverflows = obj.optLong("capture_overflows", 0),
```

No UI changes needed — these fields will appear in the debug report JSON automatically.

---

### Step 5: Unit tests

**File:** `crates/wzp-android/src/audio_ring.rs` — add a `#[cfg(test)] mod tests`
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn capacity_is_power_of_two() {
        assert!(RING_CAPACITY.is_power_of_two());
    }

    #[test]
    fn basic_write_read() {
        let ring = AudioRing::new();
        let input: Vec<i16> = (0..960).map(|i| i as i16).collect();
        ring.write(&input);
        assert_eq!(ring.available(), 960);

        let mut output = vec![0i16; 960];
        let read = ring.read(&mut output);
        assert_eq!(read, 960);
        assert_eq!(output, input);
        assert_eq!(ring.available(), 0);
    }

    #[test]
    fn wraparound() {
        let ring = AudioRing::new();
        let frame = vec![42i16; 960];
        // Write enough to wrap the buffer multiple times
        for _ in 0..20 {
            ring.write(&frame);
            let mut out = vec![0i16; 960];
            ring.read(&mut out);
            assert!(out.iter().all(|&s| s == 42));
        }
    }

    #[test]
    fn overflow_detected_by_reader() {
        let ring = AudioRing::new();
        // Write more than RING_CAPACITY without reading
        let big = vec![7i16; RING_CAPACITY + 960];
        ring.write(&big[..RING_CAPACITY]);
        ring.write(&big[RING_CAPACITY..]);

        // Reader should detect the lap
        let mut out = vec![0i16; 960];
        let read = ring.read(&mut out);
        assert!(read > 0);
        assert_eq!(ring.overflow_count(), 1);
        // Data should be from the most recent writes
        assert!(out.iter().all(|&s| s == 7));
    }

    #[test]
    fn writer_never_modifies_read_pos() {
        let ring = AudioRing::new();
        // read_pos should stay at 0 until read() is called
        let data = vec![1i16; RING_CAPACITY + 960];
        ring.write(&data);
        // read_pos is private, but tests in this module can load the cursors
        // directly: a cursor gap larger than RING_CAPACITY proves that
        // write() advanced write_pos without touching read_pos.
        let w = ring.write_pos.load(std::sync::atomic::Ordering::Relaxed);
        let r = ring.read_pos.load(std::sync::atomic::Ordering::Relaxed);
        assert_eq!(r, 0, "write() must not modify read_pos");
        assert!(w.wrapping_sub(r) > RING_CAPACITY);
    }

    #[test]
    fn underrun_counted() {
        let ring = AudioRing::new();
        let mut out = vec![0i16; 960];
        let read = ring.read(&mut out);
        assert_eq!(read, 0);
        assert_eq!(ring.underrun_count(), 1);
    }

    #[test]
    fn overflow_recovery_reads_recent_data() {
        let ring = AudioRing::new();
        // Fill with old data
        let old = vec![1i16; RING_CAPACITY];
        ring.write(&old);
        // Overwrite with new data (lapping the reader)
        let new_data = vec![99i16; 960];
        ring.write(&new_data);

        // Reader should snap forward and get the most recent data
        let mut out = vec![0i16; RING_CAPACITY];
        let read = ring.read(&mut out);
        assert_eq!(read, RING_CAPACITY);
        // The last 960 samples should be 99
        assert!(out[RING_CAPACITY - 960..].iter().all(|&s| s == 99));
        assert_eq!(ring.overflow_count(), 1);
    }
}
```

---

## Memory Ordering Reference

| Operation | Ordering | Rationale |
|-----------|----------|-----------|
| `write_pos.store` in `write()` | Release | Buffer writes visible before cursor advances |
| `write_pos.load` in `read()` | Acquire | Pairs with the Release above — sees all buffer writes |
| `write_pos.load` in `write()` | Relaxed | Writer is sole owner of `write_pos` |
| `read_pos.load` in `read()` | Relaxed | Reader is sole owner of `read_pos` |
| `read_pos.store` in `read()` | Release | Makes `available()` consistent from any thread |
| `read_pos.load` in `available()` | Relaxed | Informational only, slight staleness OK |
| All counters | Relaxed | Diagnostic only |
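One subtlety the table relies on: the cursors increase monotonically and are compared with `wrapping_sub`, so even a (purely theoretical, at audio rates) wrap of the `usize` counters does not corrupt the availability computation:

```rust
// Unsigned wrapping subtraction gives the correct element count even when
// write_pos has wrapped past usize::MAX while read_pos has not yet.
fn main() {
    let w: usize = 5;              // write_pos, after wrapping past usize::MAX
    let r: usize = usize::MAX - 9; // read_pos, still below the wrap point
    let avail = w.wrapping_sub(r);
    assert_eq!(avail, 15); // 10 items before the wrap + 5 after
    println!("available across the wrap = {avail}");
}
```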
---

## Capacity Tradeoff

| Capacity | Duration | Memory | Verdict |
|----------|----------|--------|---------|
| 8192 (2^13) | 170ms | 16KB | Less than the current 200ms — risky |
| **16384 (2^14)** | **341ms** | **32KB** | **70% more headroom, bitmask indexing** |
| 32768 (2^15) | 682ms | 64KB | Excessive latency on overflow recovery |
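The duration column is just the sample count divided by the 48 kHz mono sample rate:

```rust
// Duration of N mono samples at 48 kHz, in milliseconds.
fn ms_at_48k(samples: usize) -> f64 {
    samples as f64 * 1000.0 / 48_000.0
}

fn main() {
    // Matches the table rows: 170ms / 341ms / 682ms (rounded).
    println!("8192  -> {:.1} ms", ms_at_48k(8192));
    println!("16384 -> {:.1} ms", ms_at_48k(16384));
    println!("32768 -> {:.1} ms", ms_at_48k(32768));
}
```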

---

## Verification

1. `cargo test -p wzp-android` — new unit tests pass
2. `cargo ndk -t arm64-v8a build --release -p wzp-android` — ARM cross-compile succeeds
3. Build the APK, install it on both test devices (Nothing A059 + Pixel 6)
4. Make a 2+ minute call — verify no audio gaps
5. Check the debug report JSON: `playout_overflows` should be 0 or very small
6. Check the logcat `wzp_android` tag: send/recv stats show healthy ring state
7. Stress test: play music through one device's speaker while on a call — forces high ring throughput

---

## Files to Modify

| File | What changes |
|------|-------------|
| `crates/wzp-android/src/audio_ring.rs` | Complete rewrite — the core fix |
| `crates/wzp-android/src/stats.rs` | Add 3 counter fields |
| `crates/wzp-android/src/engine.rs` | Wire counters into `get_stats()` + periodic logs |
| `android/app/src/main/java/com/wzp/engine/CallStats.kt` | Parse 3 new JSON fields |

## What Does NOT Change

- `AudioPipeline.kt` — calls `readAudio()`/`writeAudio()` unchanged; the ring fix is transparent
- `jni_bridge.rs` — the JNI bridge passes through unchanged
- `audio_android.rs` — a separate Oboe-based ring, currently unused, with a different design
- Relay code — the relay is confirmed healthy
- Desktop client — uses `Mutex + mpsc`, not `AudioRing`

**New file:** `docs/android/fix-capture-thread-crash.md` (+149 lines)

# Fix: Capture/Playout Thread Use-After-Free on Hangup

## Problem

The app crashes (SIGSEGV) when hanging up a call. The capture thread (`wzp-capture`) calls `engine.writeAudio()` via JNI after `teardown()` has freed the native engine handle. The same race exists for the playout thread's `readAudio()`.

**Root cause:** a TOCTOU race between the `nativeHandle == 0L` check in `WzpEngine.writeAudio()`/`readAudio()` and `destroy()` freeing the native memory on the ViewModel thread. The audio threads can't be joined (libcrypto TLS destructor crash), so there's no synchronization between `stopAudio()` and `destroy()`.

**Full forensics:** `debug/INCIDENT-2026-04-06-capture-thread-use-after-free.md`

---

## Solution: Destroy Latch

Add a `CountDownLatch(2)` that both audio threads count down after exiting their loops. `teardown()` awaits the latch (with a timeout) before calling `destroy()`, guaranteeing no in-flight JNI calls.
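The real fix below is Kotlin, using `java.util.concurrent.CountDownLatch`. For illustration, the same drain-before-destroy pattern can be sketched in Rust with a condvar-backed latch (the `Latch` type and 200ms timeout here mirror the plan, but are an illustrative analogue, not project code):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

struct Latch {
    remaining: Mutex<usize>,
    cv: Condvar,
}

impl Latch {
    fn new(n: usize) -> Self {
        Latch { remaining: Mutex::new(n), cv: Condvar::new() }
    }
    fn count_down(&self) {
        let mut n = self.remaining.lock().unwrap();
        *n = n.saturating_sub(1);
        if *n == 0 {
            self.cv.notify_all();
        }
    }
    /// Returns true if all parties arrived before the timeout.
    fn await_timeout(&self, d: Duration) -> bool {
        let guard = self.remaining.lock().unwrap();
        let (n, _) = self.cv.wait_timeout_while(guard, d, |n| *n > 0).unwrap();
        *n == 0
    }
}

fn main() {
    let latch = Arc::new(Latch::new(2)); // one per audio thread
    for _ in 0..2 {
        let latch = Arc::clone(&latch);
        thread::spawn(move || {
            // ... audio loop would run here until running == false ...
            latch.count_down(); // signal: loop exited, no more engine calls
        });
    }
    // teardown(): wait for both loops to exit before freeing the engine.
    let drained = latch.await_timeout(Duration::from_millis(200));
    assert!(drained);
    println!("drained = {drained}");
}
```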

---

## Implementation Steps

### Step 1: Add a drain latch to `AudioPipeline`

**File:** `android/app/src/main/java/com/wzp/audio/AudioPipeline.kt`

Add a `CountDownLatch` field:
```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

class AudioPipeline(private val context: Context) {
    // ... existing fields ...

    /** Latch counted down by each audio thread after exiting its loop.
     * stop() does NOT wait on this — teardown waits via awaitDrain(). */
    private var drainLatch: CountDownLatch? = null
```

In `start()`, create the latch before spawning the threads:
```kotlin
fun start(engine: WzpEngine) {
    if (running) return
    running = true
    drainLatch = CountDownLatch(2) // one for capture, one for playout

    captureThread = Thread({
        runCapture(engine)
        drainLatch?.countDown() // signal: capture loop exited
        parkThread()
    }, "wzp-capture").apply { ... }

    playoutThread = Thread({
        runPlayout(engine)
        drainLatch?.countDown() // signal: playout loop exited
        parkThread()
    }, "wzp-playout").apply { ... }
    // ...
}
```

Add `awaitDrain()` — called by the ViewModel before `destroy()`:

```kotlin
/** Block until both audio threads have exited their loops (max 200ms).
 * After this returns, no more JNI calls to the engine will be made. */
fun awaitDrain(): Boolean {
    return drainLatch?.await(200, TimeUnit.MILLISECONDS) ?: true
}
```

`stop()` remains unchanged (non-blocking; it sets `running = false`).

### Step 2: Update `CallViewModel.teardown()` to await the drain

**File:** `android/app/src/main/java/com/wzp/ui/call/CallViewModel.kt`

Change `teardown()` to wait for the audio threads before destroying the engine:
```kotlin
private fun teardown(stopService: Boolean = true) {
    Log.i(TAG, "teardown: stopping audio, stopService=$stopService")
    val hadCall = audioStarted
    CallService.onStopFromNotification = null
    stopAudio() // sets running=false (non-blocking)
    stopStatsPolling()

    // Wait for the audio threads to exit their loops before destroying the
    // engine. This guarantees no in-flight JNI calls to writeAudio/readAudio.
    val drained = audioPipeline?.awaitDrain() ?: true
    if (!drained) {
        Log.w(TAG, "teardown: audio threads did not drain in time")
    }
    audioPipeline = null

    Log.i(TAG, "teardown: stopping engine")
    try { engine?.stopCall() } catch (e: Exception) { Log.w(TAG, "stopCall err: $e") }
    try { engine?.destroy() } catch (e: Exception) { Log.w(TAG, "destroy err: $e") }
    engine = null
    engineInitialized = false
    // ... rest unchanged
}
```

**Key change:** `awaitDrain()` is called AFTER `stopAudio()` (which sets `running = false`) but BEFORE `engine?.destroy()`. The latch guarantees both threads have exited their `while (running)` loops and will never call `writeAudio`/`readAudio` again.

Also move `audioPipeline = null` to after `awaitDrain()` so the reference stays alive for the latch call.

### Step 3: Move `stopAudio()` pipeline nulling

**File:** `android/app/src/main/java/com/wzp/ui/call/CallViewModel.kt`

In `stopAudio()`, do NOT null out the pipeline — let `teardown()` handle it after the drain:
```kotlin
private fun stopAudio() {
    if (!audioStarted) return
    audioPipeline?.stop() // sets running=false
    // DON'T null audioPipeline here — teardown() needs it for awaitDrain()
    audioRouteManager?.unregister()
    audioRouteManager?.setSpeaker(false)
    _isSpeaker.value = false
    audioStarted = false
}
```

---

## Files to Modify

| File | What changes |
|------|-------------|
| `android/.../audio/AudioPipeline.kt` | Add `CountDownLatch`, `countDown()` in threads, `awaitDrain()` method |
| `android/.../ui/call/CallViewModel.kt` | `teardown()` calls `awaitDrain()` before `destroy()`; `stopAudio()` doesn't null the pipeline |

## What Does NOT Change

- `WzpEngine.kt` — the `nativeHandle == 0L` guard stays as defense-in-depth
- `jni_bridge.rs` — `panic::catch_unwind` stays as a last resort
- `AudioPipeline.stop()` — remains non-blocking
- Thread parking — still needed to avoid the libcrypto TLS crash

## Verification

1. Build the APK, install it on a test device
2. Make a call, hang up — verify no crash in logcat (`adb logcat -s AndroidRuntime:E DEBUG:F`)
3. Rapid call/hangup/call/hangup cycles — stress the teardown path
4. Check logcat for `teardown: audio threads did not drain in time` — it should never appear under normal conditions
5. Verify the debug report still works after hangup (the latch doesn't interfere with report collection)

**New file:** `docs/incident-tauri-android-init-tcb.md` (+431 lines)

# Incident report — Tauri Android `__init_tcb+4` SIGSEGV

**Status:** Blocked. Reproducible crash with a known trigger at the cc::Build / rustc-link-lib layer that we cannot yet explain. Writing this report to hand off for external help.

**Project:** WarzonePhone (Rust + Tauri 2.x Mobile) Android rewrite
**Branch:** `feat/desktop-audio-rewrite`
**Target phone:** Pixel 6 (`oriole`), Android 16 (`BP3A.250905.014`), arm64-v8a
**Date range of investigation:** 2026-04-09 (one working session, ~27 builds)

---
## One-paragraph summary
|
||||||
|
|
||||||
|
We're porting the existing CPAL-backed desktop Tauri app (`desktop/src-tauri`)
|
||||||
|
to Tauri Mobile Android so the same Rust + Tauri + WebView codebase runs on
|
||||||
|
both platforms. The Android `.apk` launches, renders the home screen, and
|
||||||
|
registers on a relay for signal-only builds (no audio backend). The moment
|
||||||
|
we add **any** `cc::Build::new().cpp(true).cpp_link_stdlib("c++_shared")`
|
||||||
|
call to `build.rs` — even with a 6-line cpp file that just returns 42 and is
|
||||||
|
never called from Rust — the built `.so` crashes at launch inside
|
||||||
|
`__init_tcb(bionic_tcb*, pthread_internal_t*)+4` via `pthread_create` via
|
||||||
|
`std::thread::spawn` via `tao::ndk_glue::create` via
|
||||||
|
`Java_com_wzp_desktop_WryActivity_create`, before our Rust entry point has
|
||||||
|
a chance to run. The exact same NDK, exact same Rust toolchain, exact same
|
||||||
|
Docker image is used by the legacy `wzp-android` crate (via `cargo-ndk`)
|
||||||
|
which compiles Oboe and runs fine on the same phone.
|
||||||
|
|
||||||
|
---

## Environment

**Docker build image:** `wzp-android-builder` (Dockerfile at `scripts/Dockerfile.android-builder`)

- Base: `debian:bookworm`
- JDK 17
- Android SDK:
  - cmdline-tools latest
  - `platforms;android-34`, `platforms;android-36`
  - `build-tools;34.0.0`, `build-tools;35.0.0`
  - `ndk;26.1.10909125` (last stable before the scudo/MTE crash on NDK r27+)
  - `platform-tools`
- Node.js 20 LTS
- Rust stable `1.94.1 (e408947bf 2026-03-25)`
- Rust Android targets: `aarch64-linux-android`, `armv7-linux-androideabi`, `i686-linux-android`, `x86_64-linux-android`
- `cargo-ndk` + `cargo tauri-cli 2.10.1` (latest 2.x)

**Host:** Docker on `SepehrHomeserverdk` (remote build server).

**Phone:** Pixel 6, Android 16, kernel 6.1.134-android14-11, on the same LAN as the build machine and a local `wzp-relay` binary.

**Tauri crate:** `desktop/src-tauri/` in the workspace at the root of the repo. Depends on `tauri = "2"`, `tauri-plugin-shell = "2"`, `tokio`, `rustls`, `wzp-proto`, `wzp-codec`, `wzp-fec`, `wzp-crypto`, `wzp-transport`, and (on non-Android only) `wzp-client` with `features = ["audio", "vpio"]`. The crate's `[lib]` section is:

```toml
[lib]
name = "wzp_desktop_lib"
crate-type = ["staticlib", "cdylib", "rlib"]
```

The crate produces `libwzp_desktop_lib.so`, which is `System.loadLibrary`'d by Tauri's generated `WryActivity.onCreate` via JNI.

---

## The crash

Every failing build produces the same stack at launch, with the same pc offsets:

```
signal 11 (SIGSEGV), code 2 (SEGV_ACCERR), fault addr 0x00000072XXXXXX00f (write)

#00 pc 000000000130cc74 libwzp_desktop_lib.so (__init_tcb(bionic_tcb*, pthread_internal_t*)+4)
#01 pc 0000000001331cf0 libwzp_desktop_lib.so (pthread_create+360)
#02 pc 00000000012bee04 libwzp_desktop_lib.so (std::sys::thread::unix::Thread::new::h87be8e9feeaaaf84+184)
#03 pc 0000000000e37f5c libwzp_desktop_lib.so (std::thread::lifecycle::spawn_unchecked::h941f828f9a95150d+1504)
#04 pc 0000000000e461e8 libwzp_desktop_lib.so (std::thread::builder::Builder::spawn_unchecked::hec5f087680cb0248+112)
#05 pc 0000000000e441c8 libwzp_desktop_lib.so (std::thread::functions::spawn::ha3d3fbf2d9fe53e3+108)
#06 pc ... libwzp_desktop_lib.so (tao::platform_impl::platform::ndk_glue::create::h254c68662718841a+1792)
#07 pc ... libwzp_desktop_lib.so (Java_com_wzp_desktop_WryActivity_create+76)
```

The offsets are **byte-identical across every failing build**, even when the cpp content changes drastically (cf. `cpp_smoke.cpp` at 6 lines, at 20 lines, and the 200+ Oboe source files). We believe this is because cargo caches the Rust compilation unit and only the build-script artifacts differ, so the final link produces the same layout.

`__init_tcb` is defined locally inside our `.so` with C++ mangling:

```
_Z10__init_tcbP10bionic_tcbP18pthread_internal_t
```

It originates from bionic's `pthread_create.cpp`, which got pulled in statically from the NDK's `sysroot/usr/lib/aarch64-linux-android/libc.a`. Both the failing and the known-good (legacy `wzp_android.so`) builds contain this same static symbol — the presence of the symbol is not the problem.

The fault address is `0x72XXXXXX00f` with code `SEGV_ACCERR` (access permission error, write), aligned to `+4` inside `__init_tcb`, which is typically a store into the passed-in `bionic_tcb*`. The pointer is either NULL-ish or pointing into read-only memory.

---

## Bisection (the important part)

We started from a known-good commit (`5309938`) where the Tauri Android app launches, registers on a relay, and behaves identically to the desktop app modulo audio. Then we added features **one variable at a time**:

| Step | Commit | Change vs previous | Result |
|---|---|---|---|
| Baseline | `5309938` | — | ✅ launches, renders home, registers on relay |
| **A** | `f96d7ce` | Add `cc = "1"` build-dep + compile trivial `cpp/hello.c` via `cc::Build` (C, not C++). Static lib never linked in. | ✅ |
| **B** | `ae4f366` | Add `wzp-client` Android dep with `default-features = false` (no CPAL, no VPIO). No new imports. | ✅ |
| **C** | `19fd3dd` | Un-cfg-gate `mod engine;` in `lib.rs` so `engine.rs` compiles on Android. `CallEngine::start()` has an Android stub returning an error. | ✅ |
| **D** | `a852cad` | Compile `cpp/getauxval_fix.c` (legacy wzp-android shim). Still pure C. | ✅ |
| **E** | `4250f1b` | **Compile the full Oboe C++ bridge** (200+ source files from `google/oboe@1.8.1`). `cc::Build::new().cpp(true).std("c++17").cpp_link_stdlib(Some("c++_shared"))` + `-llog` + `-lOpenSLES` link directives. Nothing called from Rust yet — the `extern "C"` bridge functions are exported but never referenced from the Rust side. | ❌ **crash** |
| E.4 | `aa240c6` | **Only change:** replace the entire Oboe compile with ONE tiny `cpp_smoke.cpp` file: `extern "C" int wzp_cpp_smoke(void) { std::lock_guard<std::mutex> lk(m); std::thread t([](){...}); t.join(); return g.load(); }`. Still `cpp(true) + cpp_link_stdlib("c++_shared")`. Drop `-llog`/`-lOpenSLES`. | ❌ **same crash, same offsets** |
| E.2 | `0224ce6` | Shrink `cpp_smoke.cpp` further: just `std::atomic<int>` + `fetch_add`, no mutex, no thread, no includes beyond `<atomic>`. | ❌ **same crash, same offsets** |
| E.1 | `0d74366` | **Absolute minimum:** `cpp_smoke.cpp` = `extern "C" int wzp_cpp_hello(void){return 42;}`. NO `#include`. NO STL. Just a function. Still compiled with `cpp(true) + cpp_link_stdlib("c++_shared")`. | ❌ **same crash, same offsets** |

### Additional confirming observations

1. **The cpp code is dead-stripped.** `llvm-nm -a libwzp_desktop_lib.so` shows
   zero matches for `wzp_cpp_hello`, `wzp_cpp_smoke`, or any Oboe symbol in
   builds E through E.1. The static archive (`libwzp_cpp_smoke.a` /
   `liboboe_bridge.a`) exists on disk under
   `target/aarch64-linux-android/debug/build/wzp-desktop-*/out/`, but because
   nothing in Rust ever references the exported C function, the final linker
   drops it.
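   The dead-strip behaviour is standard selective archive extraction and can be reproduced with any host toolchain; a minimal sketch, with the host `cc`/`ar`/`nm` standing in for the NDK's clang and `llvm-nm` (all file names here are made up):

   ```shell
   # An unreferenced archive member is dropped by the linker, exactly as
   # described above for libwzp_cpp_smoke.a.
   set -e
   cd "$(mktemp -d)"
   printf 'int wzp_cpp_hello(void) { return 42; }\n' > smoke.c
   printf 'int main(void) { return 0; }\n' > main.c
   cc -c smoke.c -o smoke.o
   ar rcs libsmoke.a smoke.o
   cc main.c -L. -lsmoke -o demo   # archive on the link line, symbol never used
   if nm demo | grep -q wzp_cpp_hello; then echo present; else echo dead-stripped; fi
   ```

   The symbol is in the archive but not in the final binary, mirroring what `llvm-nm -a` showed for the Android `.so`.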
2. **`build.rs` link directives are the real delta.** `cc::Build::new()
   .cpp(true).cpp_link_stdlib(Some("c++_shared"))` emits a
   `cargo:rustc-link-lib=c++_shared` directive that adds a `NEEDED` entry for
   `libc++_shared.so` to the final `.so`'s dynamic table. `readelf -d` on
   the crashing `.so` shows:

   ```
   NEEDED Shared library: [libc++_shared.so]
   NEEDED Shared library: [liblog.so]       (only in full Oboe build)
   NEEDED Shared library: [libOpenSLES.so]  (only in full Oboe build)
   ```

   The working baseline `.so` has no `NEEDED` entries beyond libc/liblog.
3. **Linker version doesn't matter.** We tried forcing
   `aarch64-linux-android26-clang` as the linker (API 26 has proper dynamic
   bindings to libc.so's runtime `pthread_create`/`__init_tcb`) via three
   different mechanisms:
   - `CARGO_TARGET_AARCH64_LINUX_ANDROID_LINKER` env var in `docker run`
   - `.cargo/config.toml` workspace-level linker override
   - **Binary replacement inside the image**: `mv
     aarch64-linux-android24-clang .orig` and replace it with a shell script
     that `exec`s `aarch64-linux-android26-clang`. Verified by calling
     `--version`, which prints `Target: aarch64-unknown-linux-android26`.

   All three made no difference. The `__init_tcb` symbol is pulled statically
   from the **same** `libc.a` regardless of which clang wrapper is used — the
   NDK ships ONE `libc.a` at
   `sysroot/usr/lib/aarch64-linux-android/libc.a`, shared across all API
   levels. Only the per-API `libc.so` symlinks change (and we're linked
   statically, not dynamically, against libc).
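   The binary-replacement mechanism from the third bullet, sketched standalone (the file names are stand-ins, not the real NDK binaries; the "26" script fakes the real compiler's `--version` output):

   ```shell
   set -e
   cd "$(mktemp -d)"
   mkdir bin
   # Stand-in for aarch64-linux-android26-clang (hypothetical).
   printf '#!/bin/sh\necho "Target: aarch64-unknown-linux-android26"\n' > bin/android26-clang
   chmod +x bin/android26-clang
   # The "24" wrapper becomes a shell script that execs the "26" one,
   # the same trick the Dockerfile shim in this report uses.
   printf '#!/bin/sh\nexec "%s" "$@"\n' "$PWD/bin/android26-clang" > bin/android24-clang
   chmod +x bin/android24-clang
   bin/android24-clang --version
   ```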
4. **Legacy `wzp-android` crate works on the same phone, same image.** Built
   in the exact same Docker container, the legacy Kotlin app's JNI library
   (`crates/wzp-android`, built via `cargo ndk`) compiles a subset of the
   same Oboe code, produces a `.so` that has the same static
   `_Z10__init_tcbP...` + `pthread_create` + `pthread_create.cpp` symbols,
   and launches cleanly on the Pixel 6. Key differences between the two
   build paths:

   | | `wzp-android` (works) | `wzp-desktop` Tauri (crashes) |
   |---|---|---|
   | Build driver | `cargo ndk -t arm64-v8a build --release -p wzp-android` | `cargo tauri android build --debug --target aarch64 --apk` |
   | Profile | release | debug (release crashes identically) |
   | Linker | `aarch64-linux-android26-clang` (via `.cargo/config.toml`, which cargo-ndk honors) | `aarch64-linux-android24-clang` (tauri-cli hardcodes this and ignores config; the shim redirect makes no difference) |
   | crate-type | `["cdylib", "rlib"]` | `["staticlib", "cdylib", "rlib"]` |
   | JNI entrypoint | direct Kotlin `System.loadLibrary` + our own `native fun` declarations; first `pthread_create` runs later, from the tokio runtime inside a command | `WryActivity.onCreate` via Tauri's generated Java glue; first `pthread_create` runs **inside the JNI call** via `tao::ndk_glue::create` |
   | Other heavy deps | tokio, wzp-{proto,codec,fec,crypto,transport} | tokio, tauri, tauri-runtime-wry, tao, wry, webview2-com, soup3, webkit2gtk (all platform-specific ones cfg-gated out of Android), plus all of the above |
   | Binary size | `libwzp_android.so` ≈ 14 MB (release) | `libwzp_desktop_lib.so` ≈ 160 MB (debug), 16 MB (release) |
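   For reference, the workspace-level linker override that cargo-ndk honours looks roughly like this; this is a sketch, and the NDK path is an assumption based on the `/opt/android-sdk` layout of the Docker image described later:

   ```toml
   # Hypothetical .cargo/config.toml entry; cargo-ndk honours it, tauri-cli ignores it.
   [target.aarch64-linux-android]
   linker = "/opt/android-sdk/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android26-clang"
   ```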
5. **The crash happens in the JNI-callback thread during `onCreate`.** Frame
   #06 `tao::platform_impl::platform::ndk_glue::create+1792` is tao's Android
   event-loop bootstrap, which Tauri calls from inside
   `Java_com_wzp_desktop_WryActivity_create` in response to the Java-side
   activity lifecycle. This means the thread spawn happens while the
   Java VM still holds the native onCreate call, before `onCreate` has
   returned to the Android runtime. Legacy `wzp-android` never spawns a
   thread from an onCreate JNI call — it spawns threads only from
   `nativeSignalConnect`/similar commands invoked later from Kotlin button
   clicks, after the activity is fully initialised.
---

## Current suspect

One of the two items below, probably (2):
1. **The `.cpp(true)` mode in cc-rs changes something invisible in the link
   pipeline** (for example, emitting a different `-x` flag to clang, or
   changing linker driver selection). We have not yet verified this by
   diffing the actual rustc linker invocation between a working and a
   crashing build with `--verbose` + `-Clink-arg=-Wl,-t`.

2. **Adding `libc++_shared.so` as a NEEDED entry causes Android's dynamic
   linker to load libc++_shared.so before our `.so`'s init runs, and
   something in libc++_shared's `.init_array` interacts badly with
   tao::ndk_glue's `pthread_create` call from inside the JNI onCreate
   window.** The legacy crate doesn't hit this because (a) it has no
   NEEDED libc++_shared when built without Oboe, and (b) even when it does
   build Oboe, its thread spawns happen outside the onCreate JNI call, so
   whatever libc state is wrong at that moment has already stabilised.
We have not yet confirmed (2) with the obvious A/B test: keep `cpp_smoke.cpp`
but drop `.cpp_link_stdlib(Some("c++_shared"))` (and drop any manual
`cargo:rustc-link-lib=c++_shared`) so the NEEDED entry disappears but the
rest of the pipeline stays identical. That's the next experiment we were
going to run, but the user reasonably asked for this report first.
---

## What we've ruled out
- **NDK API level** — forcing the API-26 linker via three independent
  mechanisms made zero difference.
- **Build profile** — release (`0x6b8000` offset, 21 MB unsigned APK) and
  debug (193 MB APK, same crash offsets) both crash identically.
- **Oboe specifically** — replacing the Oboe compile with 6 lines of C++
  that do nothing still reproduces the crash.
- **cpp code being executed at runtime** — dead-stripped, not in the final
  `.so` at all per `nm -a`.
- **minSdk in build.gradle** — bumped from 24 to 26, no effect.
- **libdl.a stub issue** — ruled out via logcat (`libdl.a is a stub --- use
  libdl.so instead` was only surfacing from our own `dlsym` shim, which we
  subsequently deleted).
- **`pthread_create` interposition via `-Wl,--wrap=pthread_create`** — tried
  and reverted; the wrap target still resolved to the broken static stub.
- **Keystore / signing** — debug signing with a persistent
  `~/.android/debug.keystore` works fine; no signature mismatch issues.
---

## The files involved

### `desktop/src-tauri/build.rs` (current state, E.1)
```rust
use std::path::PathBuf;
use std::process::Command;

fn main() {
    // Embedded git hash
    let git_hash = Command::new("git")
        .args(["rev-parse", "--short", "HEAD"])
        .output()
        .ok()
        .filter(|o| o.status.success())
        .and_then(|o| String::from_utf8(o.stdout).ok())
        .map(|s| s.trim().to_string())
        .unwrap_or_else(|| "unknown".into());
    println!("cargo:rustc-env=WZP_GIT_HASH={git_hash}");
    println!("cargo:rerun-if-changed=../../.git/HEAD");
    println!("cargo:rerun-if-changed=../../.git/refs/heads");

    let target = std::env::var("TARGET").unwrap_or_default();
    if target.contains("android") {
        // Step A: plain C sanity file
        println!("cargo:rerun-if-changed=cpp/hello.c");
        cc::Build::new().file("cpp/hello.c").compile("wzp_hello");

        // Step D: legacy getauxval shim
        println!("cargo:rerun-if-changed=cpp/getauxval_fix.c");
        cc::Build::new().file("cpp/getauxval_fix.c").compile("getauxval_fix");

        // Step E.1: minimal C++ smoke — THIS STEP BRINGS BACK THE CRASH
        println!("cargo:rerun-if-changed=cpp/cpp_smoke.cpp");
        cc::Build::new()
            .cpp(true)
            .std("c++17")
            .cpp_link_stdlib(Some("c++_shared"))
            .file("cpp/cpp_smoke.cpp")
            .compile("wzp_cpp_smoke");

        // Copy libc++_shared.so into gen/android jniLibs so the runtime
        // linker can find it when the NEEDED entry fires.
        if let Ok(ndk) = std::env::var("ANDROID_NDK_HOME").or_else(|_| std::env::var("NDK_HOME")) {
            let triple = "aarch64-linux-android";
            let abi = "arm64-v8a";
            let lib_dir = format!(
                "{ndk}/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/{triple}"
            );
            println!("cargo:rustc-link-search=native={lib_dir}");
            let shared_so = format!("{lib_dir}/libc++_shared.so");
            if std::path::Path::new(&shared_so).exists() {
                let manifest = std::env::var("CARGO_MANIFEST_DIR").unwrap_or_default();
                let jni_dir = format!("{manifest}/gen/android/app/src/main/jniLibs/{abi}");
                if std::fs::create_dir_all(&jni_dir).is_ok() {
                    let _ = std::fs::copy(&shared_so, format!("{jni_dir}/libc++_shared.so"));
                }
            }
        }
    }

    tauri_build::build()
}
```
### `desktop/src-tauri/cpp/cpp_smoke.cpp` (E.1)

```cpp
extern "C" int wzp_cpp_hello(void) {
    return 42;
}
```
### `desktop/src-tauri/Cargo.toml` (relevant excerpts)

```toml
[package]
name = "wzp-desktop"
version = "0.1.0"
edition = "2024"

[lib]
name = "wzp_desktop_lib"
crate-type = ["staticlib", "cdylib", "rlib"]

[[bin]]
name = "wzp-desktop"
path = "src/main.rs"

[build-dependencies]
tauri-build = { version = "2", features = [] }
cc = "1"

[dependencies]
tauri = { version = "2", features = [] }
tauri-plugin-shell = "2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }
tracing = "0.1"
tracing-subscriber = "0.3"
anyhow = "1"
rustls = { version = "0.23", default-features = false, features = ["ring", "std"] }

wzp-proto = { path = "../../crates/wzp-proto" }
wzp-codec = { path = "../../crates/wzp-codec" }
wzp-fec = { path = "../../crates/wzp-fec" }
wzp-crypto = { path = "../../crates/wzp-crypto" }
wzp-transport = { path = "../../crates/wzp-transport" }

[target.'cfg(not(target_os = "android"))'.dependencies]
wzp-client = { path = "../../crates/wzp-client", features = ["audio", "vpio"] }

[target.'cfg(target_os = "android")'.dependencies]
wzp-client = { path = "../../crates/wzp-client", default-features = false }
```
---

## Reproduction

A fresh clone on a Linux x86_64 host with:
```bash
git clone ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git
cd wz-phone
git checkout feat/desktop-audio-rewrite
git reset --hard 0d74366   # <-- step E.1, smallest crashing commit

# Need: Android NDK r26.1.10909125, JDK 17, Node 20, Rust stable, cargo tauri 2.x
scripts/prep-linux-mint.sh   # installs all the above into /opt/android-sdk etc.

cd desktop
npm install
cd src-tauri
cargo tauri android build --debug --target aarch64 --apk
adb install -r gen/android/app/build/outputs/apk/universal/debug/app-universal-debug.apk
adb logcat -c && adb shell am start -n com.wzp.desktop/.MainActivity
adb logcat | grep -E "F DEBUG|__init_tcb|pthread_create"
```
Expected result: SIGSEGV at `__init_tcb+4` within ~500 ms of launch.

Reverting `cpp/cpp_smoke.cpp` + the `cc::Build` call for it in `build.rs`
(one git command: `git revert 0d74366 aa240c6 0224ce6 a852cad`) restores a
working build. Keeping the C sanity compiles (`hello.c`, `getauxval_fix.c`)
is fine — only the `.cpp(true) + .cpp_link_stdlib("c++_shared")` combination
triggers the regression.
---

## What we'd like help with

1. **Is our suspect #2 actually the mechanism?** Is there a known issue
   where a Tauri/tao Android cdylib crashes on load when it has a
   `libc++_shared.so` NEEDED entry and tries to spawn a thread from inside
   an onCreate JNI call?

2. **What's the correct way to link Oboe (or any C++ Android audio
   library) into a `cargo tauri android build` cdylib** without hitting
   this? Is there a known-good combination of cc-rs flags / linker
   arguments / cargo config?

3. **Is there a way to force `cargo tauri` to use the same linker setup
   as `cargo ndk`**, which reliably produces working Oboe-linked .so
   files from the exact same workspace? We've tried the env var override,
   `.cargo/config.toml`, and image-level binary replacement — cargo
   tauri ignores all three and keeps using
   `aarch64-linux-android24-clang`.

4. **Is there a way to defer `tao::ndk_glue::create`'s thread spawn to
   after `onCreate` returns**, so that whatever bionic state `__init_tcb`
   depends on is ready?

5. **Lastly** — is there a fundamentally different approach we should
   take (e.g., use the `oboe` Rust crate from crates.io instead of a
   hand-rolled C++ bridge, use Android's AAudio directly via the `ndk`
   crate's aaudio bindings, or even abandon the C++ audio path and
   implement mic/speaker via JNI into Java `AudioRecord`/`AudioTrack`)?
130  scripts/Dockerfile.android-builder  Normal file
@@ -0,0 +1,130 @@
# =============================================================================
# WZ Phone — Android build environment (Debian 12 / Bookworm)
#
# Supports both:
#   1. Legacy Kotlin+JNI Android app (via cargo-ndk + gradle)
#   2. Tauri 2.x Mobile Android app (via tauri-cli + Node/npm)
#
# Toolchain:
#   - Debian 12 (cmake 3.25, no Android cross-compilation bugs)
#   - JDK 17 (Gradle 8.5 + AGP 8.2.0 compatible)
#   - NDK 26.1 (last stable before scudo/MTE crash on NDK 27+)
#   - Node.js 20 LTS (for Tauri frontend build)
#   - Rust stable with all 4 Android targets + cargo-ndk + tauri-cli 2.x
#
# Build: docker build -t wzp-android-builder -f Dockerfile.android-builder .
# =============================================================================
FROM debian:bookworm

ARG NDK_VERSION=26.1.10909125
ARG ANDROID_API=34
# Tauri 2.x mobile targets compileSdk 36 + build-tools 35 by default. Install
# both 34 (legacy Kotlin app) and 35/36 (Tauri mobile) so the same image works
# for both pipelines.
ARG ANDROID_API_TAURI=36
ARG BUILD_TOOLS_TAURI=35.0.0

ENV DEBIAN_FRONTEND=noninteractive \
    ANDROID_HOME=/opt/android-sdk \
    JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64

ENV ANDROID_NDK_HOME=$ANDROID_HOME/ndk/$NDK_VERSION \
    ANDROID_NDK=$ANDROID_HOME/ndk/$NDK_VERSION
# ── System packages ──────────────────────────────────────────────────────────
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    curl \
    git \
    libssl-dev \
    pkg-config \
    unzip \
    wget \
    zip \
    openjdk-17-jdk-headless \
    ca-certificates \
    libasound2-dev \
    file \
    xz-utils \
    && rm -rf /var/lib/apt/lists/*

# ── Node.js 20 LTS (required by Tauri for frontend build) ────────────────────
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
    && apt-get install -y --no-install-recommends nodejs \
    && rm -rf /var/lib/apt/lists/* \
    && node --version \
    && npm --version
# ── Android SDK + NDK 26.1 ──────────────────────────────────────────────────
RUN mkdir -p $ANDROID_HOME/cmdline-tools \
    && cd /tmp \
    && wget -q https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip -O cmdtools.zip \
    && unzip -qo cmdtools.zip -d $ANDROID_HOME/cmdline-tools \
    && mv $ANDROID_HOME/cmdline-tools/cmdline-tools $ANDROID_HOME/cmdline-tools/latest \
    && rm cmdtools.zip

RUN yes | $ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager --licenses > /dev/null 2>&1 \
    && $ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager --install \
        "platforms;android-${ANDROID_API}" \
        "build-tools;${ANDROID_API}.0.0" \
        "platforms;android-${ANDROID_API_TAURI}" \
        "build-tools;${BUILD_TOOLS_TAURI}" \
        "ndk;${NDK_VERSION}" \
        "platform-tools" \
        2>&1 | grep -v '^\[' > /dev/null
# Work around the API-24 libc.a stub in the NDK. Any C++ static lib we
# link into libwzp_desktop_lib.so (e.g. the Oboe audio bridge) pulls in
# bionic's static pthread_create from API-24 libc.a via libc++_shared,
# and that pthread_create crashes at __init_tcb+4 when called from a
# .so loaded via dlopen (the static stub expects libc init state that
# only exists for main executables). API-26 has the proper runtime
# bindings. Tauri-cli hard-codes aarch64-linux-android24-clang as the
# linker and ignores .cargo/config.toml overrides, so the only sure
# fix is to replace the NDK's ${abi}24-clang binary itself with a
# shim that exec()s the ${abi}26-clang equivalent. Applies to all four
# ABIs × {clang, clang++}. The legacy wzp-android crate works without
# this because cargo-ndk honours a crate-level linker override; the
# shim is the minimal targeted fix for the cargo-tauri build path.
# Added as Option 3 for the incremental Step E regression (commit 4250f1b).
RUN set -eux; \
    BIN=$ANDROID_NDK_HOME/toolchains/llvm/prebuilt/linux-x86_64/bin; \
    for abi in aarch64-linux-android armv7a-linux-androideabi i686-linux-android x86_64-linux-android; do \
        for suffix in clang clang++; do \
            mv "$BIN/${abi}24-${suffix}" "$BIN/${abi}24-${suffix}.orig"; \
            printf '#!/bin/sh\nexec "%s/%s26-%s" "$@"\n' "$BIN" "$abi" "$suffix" > "$BIN/${abi}24-${suffix}"; \
            chmod +x "$BIN/${abi}24-${suffix}"; \
        done; \
    done
# Make SDK world-readable so builder user can access it
RUN chmod -R a+rX $ANDROID_HOME

# ── Builder user (1000:1000) ─────────────────────────────────────────────────
RUN groupadd -g 1000 builder \
    && useradd -m -u 1000 -g 1000 -s /bin/bash builder

USER builder
WORKDIR /home/builder

# ── Rust toolchain ───────────────────────────────────────────────────────────
# Install all 4 Android targets (Tauri Mobile builds for all ABIs by default;
# cargo-ndk legacy path only needs arm64-v8a — both workflows supported).
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \
    | sh -s -- -y --default-toolchain stable \
    && . $HOME/.cargo/env \
    && rustup target add \
        aarch64-linux-android \
        armv7-linux-androideabi \
        i686-linux-android \
        x86_64-linux-android \
    && cargo install cargo-ndk \
    && cargo install tauri-cli --version "^2.0" --locked

ENV PATH="/home/builder/.cargo/bin:$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/platform-tools:$JAVA_HOME/bin:$PATH"

# NDK_HOME is the env var tauri-cli checks (in addition to ANDROID_NDK_HOME)
ENV NDK_HOME=$ANDROID_NDK_HOME

WORKDIR /build/source
164  scripts/build-and-notify.sh  Executable file
@@ -0,0 +1,164 @@
#!/usr/bin/env bash
set -euo pipefail

# Build Android APK via Docker on SepehrHomeserverdk, upload to rustypaste,
# notify via ntfy.sh/wzp. Fire and forget.
#
# Usage:
#   ./scripts/build-and-notify.sh            Build + upload + notify
#   ./scripts/build-and-notify.sh --rust     Force Rust rebuild
#   ./scripts/build-and-notify.sh --pull     Git pull before building
#   ./scripts/build-and-notify.sh --install  Also download + adb install locally

REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/android-apk"
SSH_OPTS="-o ConnectTimeout=15 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 -o LogLevel=ERROR"

REBUILD_RUST=0
DO_PULL=1
DO_INSTALL=0
for arg in "$@"; do
    case "$arg" in
        --rust) REBUILD_RUST=1 ;;
        --pull) DO_PULL=1 ;;
        --no-pull) DO_PULL=0 ;;
        --install) DO_INSTALL=1 ;;
    esac
done
log() { echo -e "\033[1;36m>>> $*\033[0m"; }

ssh_cmd() { ssh -A $SSH_OPTS "$REMOTE_HOST" "$@"; }

# Upload the remote build script
log "Uploading build script to remote..."
ssh_cmd "cat > /tmp/wzp-docker-build.sh" <<'REMOTE_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail

BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
REBUILD_RUST="${1:-0}"
DO_PULL="${2:-0}"

notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }

trap 'notify "WZP Android build FAILED! Check /tmp/wzp-build.log"' ERR

# Pull if requested
if [ "$DO_PULL" = "1" ]; then
    echo ">>> Pulling latest..."
    cd "$BASE_DIR/data/source"
    git reset --hard HEAD 2>/dev/null || true
    git clean -fd 2>/dev/null || true
    git gc --prune=now 2>/dev/null || true
    git fetch origin feat/android-voip-client 2>&1 | tail -3
    git reset --hard origin/feat/android-voip-client 2>/dev/null || true
fi
# Clean Rust if requested
if [ "$REBUILD_RUST" = "1" ]; then
    echo ">>> Cleaning Rust target..."
    rm -rf "$BASE_DIR/data/cache/target/aarch64-linux-android/release"
fi

# Fix perms
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache" \
    ! -user 1000 -o ! -group 1000 2>/dev/null | \
    xargs -r chown 1000:1000 2>/dev/null || true

# Clean jniLibs
rm -rf "$BASE_DIR/data/source/android/app/src/main/jniLibs/arm64-v8a"

GIT_HASH=$(cd "$BASE_DIR/data/source" && git rev-parse --short HEAD 2>/dev/null || echo unknown)
notify "WZP Android build started [$GIT_HASH]..."
echo ">>> Building in Docker..."
docker run --rm --user 1000:1000 \
    -v "$BASE_DIR/data/source:/build/source" \
    -v "$BASE_DIR/data/cache/cargo-registry:/home/builder/.cargo/registry" \
    -v "$BASE_DIR/data/cache/cargo-git:/home/builder/.cargo/git" \
    -v "$BASE_DIR/data/cache/target:/build/source/target" \
    -v "$BASE_DIR/data/cache/gradle:/home/builder/.gradle" \
    wzp-android-builder bash -c '
        set -euo pipefail
        cd /build/source

        echo ">>> Rust build..."
        cargo ndk -t arm64-v8a -o android/app/src/main/jniLibs build --release -p wzp-android 2>&1 | tail -5

        echo ">>> Checking .so files..."
        # cargo-ndk may not copy libc++_shared.so — grab it from the NDK if missing
        if [ ! -f android/app/src/main/jniLibs/arm64-v8a/libc++_shared.so ]; then
            echo ">>> libc++_shared.so missing, copying from NDK..."
            NDK_LIBCXX=$(find "$ANDROID_NDK_HOME" -name "libc++_shared.so" -path "*/aarch64-linux-android/*" | head -1)
            if [ -n "$NDK_LIBCXX" ]; then
                cp "$NDK_LIBCXX" android/app/src/main/jniLibs/arm64-v8a/
                echo "Copied from: $NDK_LIBCXX"
            else
                echo "WARNING: libc++_shared.so not found in NDK, APK may crash at runtime"
            fi
        fi
        ls -lh android/app/src/main/jniLibs/arm64-v8a/
        [ -f android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ] || { echo "ERROR: libwzp_android.so missing!"; exit 1; }

        echo ">>> APK build..."
        cd android && chmod +x gradlew
        ./gradlew clean assembleDebug --no-daemon --warning-mode=none 2>&1 | tail -50
        echo "APK_BUILT"
    '
# Upload to rustypaste
echo ">>> Uploading to rustypaste..."
source "$BASE_DIR/.env"
APK=$(find "$BASE_DIR/data/source/android" -name "app-debug*.apk" -path "*/outputs/apk/*" | head -1)
if [ -n "$APK" ]; then
    URL=$(curl -s -F "file=@$APK" -H "Authorization: $rusty_auth_token" "$rusty_address")
    echo "UPLOAD_URL=$URL"
    notify "WZP Android [$GIT_HASH] done! APK: $URL"
    echo ">>> Done! APK at: $URL"
else
    notify "WZP build FAILED - no APK"
    echo "ERROR: No APK found"
    exit 1
fi
REMOTE_SCRIPT
ssh_cmd "chmod +x /tmp/wzp-docker-build.sh"

# Run in tmux
log "Starting build in tmux..."
ssh_cmd "tmux kill-session -t wzp-build 2>/dev/null; true"
ssh_cmd "tmux new-session -d -s wzp-build '/tmp/wzp-docker-build.sh $REBUILD_RUST $DO_PULL 2>&1 | tee /tmp/wzp-build.log'"

log "Build running! You'll get a notification on ntfy.sh/wzp with the download URL."
echo ""
echo "  Monitor: ssh $REMOTE_HOST 'tail -f /tmp/wzp-build.log'"
echo "  Status:  ssh $REMOTE_HOST 'tail -5 /tmp/wzp-build.log'"
echo ""

# Optionally wait and install locally
if [ "$DO_INSTALL" = "1" ]; then
    log "Waiting for build to finish..."
    while true; do
        sleep 15
        if ssh_cmd "grep -q 'UPLOAD_URL\|ERROR' /tmp/wzp-build.log 2>/dev/null"; then
            break
        fi
    done

    # -f2- keeps the whole URL even if it contains '=' characters
    URL=$(ssh_cmd "grep UPLOAD_URL /tmp/wzp-build.log | tail -1 | cut -d= -f2-")
    if [ -n "$URL" ]; then
        log "Downloading APK..."
        mkdir -p "$LOCAL_OUTPUT"
        curl -s -o "$LOCAL_OUTPUT/wzp-debug.apk" "$URL"
        log "Installing..."
        adb uninstall com.wzp.phone 2>/dev/null || true
        adb install "$LOCAL_OUTPUT/wzp-debug.apk"
        log "Done!"
    else
        echo "ERROR: Build failed" >&2
    fi
fi
376  scripts/build-android-cloud.sh  Executable file
@@ -0,0 +1,376 @@
#!/usr/bin/env bash
set -euo pipefail

# Build WarzonePhone Android APK using a temporary Hetzner Cloud VPS.
# Creates a VM, builds both debug and release APKs, downloads them, destroys the VM.
#
# Prerequisites: hcloud CLI authenticated, SSH key "wz" registered.
#
# Usage:
#   ./scripts/build-android-cloud.sh             Full build (create → build → download → destroy)
#   ./scripts/build-android-cloud.sh --prepare   Create VM and install deps only
#   ./scripts/build-android-cloud.sh --build     Build on existing VM
#   ./scripts/build-android-cloud.sh --transfer  Download APKs from VM
#   ./scripts/build-android-cloud.sh --destroy   Delete the VM
#   ./scripts/build-android-cloud.sh --all       prepare + build + transfer (VM persists)
#   ./scripts/build-android-cloud.sh --upload    Re-upload source to existing VM
#
# Environment variables (all optional):
#   WZP_BRANCH       Branch to build (default: feat/android-voip-client)
#   WZP_SERVER_TYPE  Hetzner server type (default: cx33)
#   WZP_KEEP_VM      Set to 1 to skip destroy on full build

SSH_KEY_NAME="wz"
SSH_KEY_PATH="/Users/manwe/CascadeProjects/wzp"
SERVER_TYPE="${WZP_SERVER_TYPE:-cx33}"
IMAGE="ubuntu-24.04"
SERVER_NAME="wzp-android-builder"
REMOTE_USER="root"
OUTPUT_DIR="target/android-apk"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
BRANCH="${WZP_BRANCH:-feat/android-voip-client}"
KEEP_VM="${WZP_KEEP_VM:-0}"

SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 -o LogLevel=ERROR"

# NDK 26.1 — NDK 27 crashes scudo on Android 16 MTE devices
NDK_VERSION="26.1.10909125"
ANDROID_API="34"
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

log() { echo -e "\n\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }
die() { err "$@"; do_destroy_quiet; exit 1; }

get_vm_ip() {
  hcloud server list -o columns=name,ipv4 -o noheader 2>/dev/null | grep "$SERVER_NAME" | awk '{print $2}' | tr -d ' '
}

ssh_cmd() {
  local ip
  ip=$(get_vm_ip)
  [ -n "$ip" ] || die "No VM found. Run --prepare first."
  ssh $SSH_OPTS -A -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip" "$@"
}

scp_down() {
  local ip
  ip=$(get_vm_ip)
  [ -n "$ip" ] || die "No VM found."
  scp $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip:$1" "$2"
}

do_destroy_quiet() {
  local name
  name=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
  if [ -n "$name" ]; then
    echo ""
    err "Cleaning up — destroying VM $name"
    hcloud server delete "$name" 2>/dev/null || true
  fi
}
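The `get_vm_ip` helper above is just a grep/awk pipeline over tabular `hcloud` output. A minimal sketch of the same parsing against canned output — the server names and addresses here are made up for illustration:

```shell
# Simulated `hcloud server list -o columns=name,ipv4 -o noheader` output
sample='wzp-android-builder   192.0.2.10
other-vm              192.0.2.20'

# Same pipeline as get_vm_ip: match the row by name, take column 2
ip=$(printf '%s\n' "$sample" | grep "wzp-android-builder" | awk '{print $2}' | tr -d ' ')
echo "$ip"   # → 192.0.2.10
```

One caveat: `grep` matches substrings, so a server named `wzp-android-builder-2` would also match; `awk -v n="$SERVER_NAME" '$1 == n {print $2}'` would be the stricter variant.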
# ---------------------------------------------------------------------------
# --prepare: Create VM, install all build dependencies
# ---------------------------------------------------------------------------

do_prepare() {
  # Check if VM already exists
  local existing
  existing=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
  if [ -n "$existing" ]; then
    log "VM already exists: $existing — reusing"
    do_upload
    return
  fi

  log "Creating Hetzner VM ($SERVER_TYPE, $IMAGE)..."
  hcloud server create \
    --name "$SERVER_NAME" \
    --type "$SERVER_TYPE" \
    --image "$IMAGE" \
    --ssh-key "$SSH_KEY_NAME" \
    --location fsn1 \
    --quiet \
    || die "Failed to create VM"

  local ip
  ip=$(get_vm_ip)
  [ -n "$ip" ] || die "VM created but no IP found"
  echo "  VM: $SERVER_NAME @ $ip"

  # Wait for SSH
  log "Waiting for SSH..."
  local ok=0
  for i in $(seq 1 30); do
    if ssh $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip" "echo ok" &>/dev/null; then
      ok=1
      break
    fi
    sleep 2
  done
  [ "$ok" -eq 1 ] || die "SSH timeout after 60s"

  # System packages
  log "Installing system packages (cmake, JDK 17, build tools)..."
  ssh_cmd "export DEBIAN_FRONTEND=noninteractive && \
    apt-get update -qq && \
    apt-get install -y -qq \
      build-essential cmake curl git libssl-dev pkg-config \
      unzip wget zip openjdk-17-jdk-headless \
      > /dev/null 2>&1" \
    || die "Failed to install system packages"

  # Verify cmake version (must be <= 3.30)
  local cmake_ver
  cmake_ver=$(ssh_cmd "cmake --version | head -1")
  echo "  cmake: $cmake_ver"
  echo "  java:  $(ssh_cmd "java -version 2>&1 | head -1")"

  # Rust
  log "Installing Rust toolchain..."
  ssh_cmd "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable > /dev/null 2>&1" \
    || die "Failed to install Rust"
  ssh_cmd "source \$HOME/.cargo/env && rustup target add aarch64-linux-android > /dev/null 2>&1"
  ssh_cmd "source \$HOME/.cargo/env && cargo install cargo-ndk > /dev/null 2>&1" \
    || die "Failed to install cargo-ndk"
  echo "  rust: $(ssh_cmd "source \$HOME/.cargo/env && rustc --version")"

  # Android SDK + NDK
  log "Installing Android SDK + NDK $NDK_VERSION..."
  ssh_cmd "export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 && \
    mkdir -p \$HOME/android-sdk/cmdline-tools && \
    cd /tmp && \
    wget -q https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip -O cmdtools.zip && \
    unzip -qo cmdtools.zip -d \$HOME/android-sdk/cmdline-tools && \
    mv \$HOME/android-sdk/cmdline-tools/cmdline-tools \$HOME/android-sdk/cmdline-tools/latest 2>/dev/null; \
    yes | \$HOME/android-sdk/cmdline-tools/latest/bin/sdkmanager --licenses > /dev/null 2>&1; \
    \$HOME/android-sdk/cmdline-tools/latest/bin/sdkmanager --install \
      'platforms;android-${ANDROID_API}' \
      'build-tools;${ANDROID_API}.0.0' \
      'ndk;${NDK_VERSION}' \
      'platform-tools' \
      2>&1 | grep -v '^\[' > /dev/null" \
    || die "Failed to install Android SDK/NDK"

  ssh_cmd "[ -d \$HOME/android-sdk/ndk/$NDK_VERSION ]" \
    || die "NDK not found after install"
  echo "  NDK: $NDK_VERSION"

  # Upload source
  do_upload

  log "VM ready!"
  echo "  IP:  $ip"
  echo "  SSH: ssh -A -i $SSH_KEY_PATH root@$ip"
}
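The SSH wait loop in `do_prepare` (30 attempts × 2 s sleep ≈ 60 s) is a generic bounded-retry pattern. A self-contained sketch of the same idea — the `retry` helper name is my own, not part of the script:

```shell
# retry N DELAY CMD...: run CMD up to N times, sleeping DELAY seconds between tries
retry() {
  local tries=$1 delay=$2 i
  shift 2
  for i in $(seq 1 "$tries"); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

retry 3 0 true  && echo "reachable"
retry 2 0 false || echo "gave up after 2 tries"
```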
# ---------------------------------------------------------------------------
# --upload: Upload source code to VM
# ---------------------------------------------------------------------------

do_upload() {
  log "Uploading source code (rsync)..."
  local ip
  ip=$(get_vm_ip)
  [ -n "$ip" ] || die "No VM found."
  rsync -az --delete \
    --exclude='target' \
    --exclude='.git' \
    --exclude='.claude' \
    --exclude='node_modules' \
    --exclude='dist' \
    --exclude='desktop/src-tauri/gen' \
    -e "ssh $SSH_OPTS -i $SSH_KEY_PATH" \
    "$PROJECT_DIR/" "$REMOTE_USER@$ip:/root/wzp-build/"
  echo "  Source uploaded."
}
# ---------------------------------------------------------------------------
# --build: Build native .so + debug & release APKs
# ---------------------------------------------------------------------------

do_build() {
  log "Building Rust native library (arm64-v8a, release)..."

  # Clean Rust release target to force a full rebuild.
  # cargo-ndk only copies libc++_shared.so when it actually links — a partial
  # clean that skips relinking leaves libc++_shared.so missing from jniLibs.
  ssh_cmd "rm -rf /root/wzp-build/target/aarch64-linux-android/release \
    /root/wzp-build/android/app/src/main/jniLibs/arm64-v8a"

  # ANDROID_NDK must be set (not just ANDROID_NDK_HOME) — cmake checks it
  ssh_cmd "source \$HOME/.cargo/env && \
    export ANDROID_HOME=\$HOME/android-sdk && \
    export ANDROID_NDK_HOME=\$ANDROID_HOME/ndk/$NDK_VERSION && \
    export ANDROID_NDK=\$ANDROID_NDK_HOME && \
    cd /root/wzp-build && \
    cargo ndk -t arm64-v8a \
      -o android/app/src/main/jniLibs \
      build --release -p wzp-android 2>&1" | tail -5 \
    || die "Rust native build failed"

  ssh_cmd "[ -f /root/wzp-build/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ]" \
    || die "libwzp_android.so not found after build"

  local so_size
  so_size=$(ssh_cmd "du -h /root/wzp-build/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so | cut -f1")
  echo "  .so: $so_size"

  # Generate debug keystore if missing
  ssh_cmd "[ -f /root/wzp-build/android/keystore/wzp-debug.jks ] || \
    (mkdir -p /root/wzp-build/android/keystore && \
     keytool -genkey -v \
       -keystore /root/wzp-build/android/keystore/wzp-debug.jks \
       -keyalg RSA -keysize 2048 -validity 10000 \
       -alias wzp-debug -storepass android -keypass android \
       -dname 'CN=WZP Debug' > /dev/null 2>&1)"

  # Build debug APK
  log "Building debug APK..."
  ssh_cmd "export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 && \
    export ANDROID_HOME=\$HOME/android-sdk && \
    cd /root/wzp-build/android && \
    chmod +x ./gradlew && \
    ./gradlew assembleDebug --no-daemon --warning-mode=none 2>&1" | tail -3 \
    || die "Debug APK build failed"

  # Build release APK (uses debug keystore for now)
  log "Building release APK..."
  # Copy debug keystore as release keystore (same password in build.gradle)
  ssh_cmd "cp /root/wzp-build/android/keystore/wzp-debug.jks /root/wzp-build/android/keystore/wzp-release.jks 2>/dev/null; true"
  ssh_cmd "export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 && \
    export ANDROID_HOME=\$HOME/android-sdk && \
    cd /root/wzp-build/android && \
    ./gradlew assembleRelease --no-daemon --warning-mode=none 2>&1" | tail -3 \
    || echo "  (release APK failed — debug APK still available)"

  log "Build complete!"
  ssh_cmd "find /root/wzp-build/android -name '*.apk' -path '*/outputs/apk/*' -exec ls -lh {} \;"
}
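The `ssh_cmd … | tail -5 || die` lines in `do_build` only report remote failures because of the `set -o pipefail` at the top of the script: without it, the pipeline's exit status would be `tail`'s, which is always 0. A quick demonstration of the difference:

```shell
set -o pipefail
false | tail -n 1 && echo "masked" || echo "failure propagated"

set +o pipefail
false | tail -n 1 && echo "masked" || echo "failure propagated"
```

The first pipeline reports "failure propagated"; the second reports "masked", because `tail` exits 0 and hides the failing left-hand side.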
# ---------------------------------------------------------------------------
# --transfer: Download APKs to local machine
# ---------------------------------------------------------------------------

do_transfer() {
  log "Downloading APKs..."
  mkdir -p "$OUTPUT_DIR"

  local ip
  ip=$(get_vm_ip)

  # Debug APK
  local debug_apk
  debug_apk=$(ssh_cmd "find /root/wzp-build/android -name 'app-debug*.apk' -path '*/outputs/apk/*' | head -1")
  if [ -n "$debug_apk" ]; then
    scp_down "$debug_apk" "$OUTPUT_DIR/wzp-debug.apk"
    echo "  debug: $OUTPUT_DIR/wzp-debug.apk ($(du -h "$OUTPUT_DIR/wzp-debug.apk" | cut -f1))"
  fi

  # Release APK
  local release_apk
  release_apk=$(ssh_cmd "find /root/wzp-build/android -name 'app-release*.apk' -path '*/outputs/apk/*' | head -1" || true)
  if [ -n "$release_apk" ]; then
    scp_down "$release_apk" "$OUTPUT_DIR/wzp-release.apk"
    echo "  release: $OUTPUT_DIR/wzp-release.apk ($(du -h "$OUTPUT_DIR/wzp-release.apk" | cut -f1))"
  fi

  # Also copy the .so for inspection
  scp_down "/root/wzp-build/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so" "$OUTPUT_DIR/libwzp_android.so"
  echo "  .so: $OUTPUT_DIR/libwzp_android.so"

  log "Transfer complete!"
  echo ""
  echo "  Install debug:   adb install -r $OUTPUT_DIR/wzp-debug.apk"
  [ -f "$OUTPUT_DIR/wzp-release.apk" ] && echo "  Install release: adb install -r $OUTPUT_DIR/wzp-release.apk"
}
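The `find … | head -1` queries above pick the first matching APK wherever Gradle placed it under `outputs/apk/`. A local sketch of that selection, using a throwaway temp directory rather than the real build tree:

```shell
demo=$(mktemp -d)
mkdir -p "$demo/app/build/outputs/apk/debug"
touch "$demo/app/build/outputs/apk/debug/app-debug.apk"

# Same shape as the remote query: name pattern + path filter, first hit wins
apk=$(find "$demo" -name 'app-debug*.apk' -path '*/outputs/apk/*' | head -1)
echo "${apk##*/}"   # → app-debug.apk

rm -rf "$demo"
```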
# ---------------------------------------------------------------------------
# --destroy: Delete the VM
# ---------------------------------------------------------------------------

do_destroy() {
  local name
  name=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
  if [ -z "$name" ]; then
    echo "No VM to destroy."
    return
  fi
  log "Deleting VM: $name"
  hcloud server delete "$name"
  echo "  Done."
}
# ---------------------------------------------------------------------------
# Full build: create → build → transfer → destroy
# ---------------------------------------------------------------------------

do_full() {
  trap 'err "Build failed!"; do_destroy_quiet; exit 1' ERR

  do_prepare

  # Disable trap during build — release APK failure is non-fatal
  trap - ERR
  do_build
  do_transfer
  trap 'err "Build failed!"; do_destroy_quiet; exit 1' ERR

  if [ "$KEEP_VM" = "1" ]; then
    log "VM kept alive (WZP_KEEP_VM=1). Destroy with: $0 --destroy"
  else
    do_destroy
  fi

  log "All done!"
  echo ""
  echo "  ┌──────────────────────────────────────────────────┐"
  echo "  │ Debug APK:   $OUTPUT_DIR/wzp-debug.apk"
  [ -f "$OUTPUT_DIR/wzp-release.apk" ] && \
    echo "  │ Release APK: $OUTPUT_DIR/wzp-release.apk"
  echo "  │"
  echo "  │ Install: adb install -r $OUTPUT_DIR/wzp-debug.apk"
  echo "  └──────────────────────────────────────────────────┘"
}
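`do_full` leans on an `ERR` trap for VM cleanup and temporarily disables it around the non-fatal build steps. The mechanics in isolation, run in a child `bash` so no cleanup of real resources is involved:

```shell
bash -c '
  trap "echo trap fired: cleaning up" ERR
  false        # any failing simple command triggers the ERR trap
  trap - ERR   # disabled, as do_full does around do_build/do_transfer
  false        # no trap output this time
  echo done
' || true
```

The child shell prints the cleanup line once for the first `false`, stays silent for the second, and still reaches `done`.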
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

case "${1:-}" in
  --prepare)  do_prepare ;;
  --build)    do_build ;;
  --transfer) do_transfer ;;
  --destroy)  do_destroy ;;
  --upload)   do_upload ;;
  --all)
    do_prepare
    do_build
    do_transfer
    log "VM still running. Destroy with: $0 --destroy"
    ;;
  "")
    do_full
    ;;
  *)
    echo "Usage: $0 [--prepare|--build|--transfer|--destroy|--all|--upload]"
    echo ""
    echo "  (no args)   Full build: create VM → build → download → destroy VM"
    echo "  --prepare   Create VM and install deps"
    echo "  --build     Build on existing VM"
    echo "  --transfer  Download APKs from VM"
    echo "  --destroy   Delete the VM"
    echo "  --all       prepare + build + transfer (VM persists)"
    echo "  --upload    Re-upload source to existing VM"
    echo ""
    echo "Environment:"
    echo "  WZP_BRANCH=$BRANCH"
    echo "  WZP_SERVER_TYPE=$SERVER_TYPE"
    echo "  WZP_KEEP_VM=$KEEP_VM (set to 1 to skip auto-destroy)"
    exit 1
    ;;
esac
416
scripts/build-android-docker.sh
Executable file
@@ -0,0 +1,416 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# WZ Phone — Android APK build via Docker on remote host
#
# Replaces Hetzner Cloud VMs with a Docker container on SepehrHomeserverdk.
# Persistent storage at /mnt/storage/manBuilder/data/{source,cache,keystore}.
# Uploads APKs to rustypaste, then SCPs them back locally.
#
# Prerequisites:
#   - SSH config has a "SepehrHomeserverdk" host entry
#   - SSH agent running with keys for both the remote host and git.manko.yoga
#   - Docker installed on the remote host
#   - /mnt/storage/manBuilder/.env with rusty_address and rusty_auth_token
#
# Usage:
#   ./scripts/build-android-docker.sh             Full: pull+prepare+build+upload+transfer
#   ./scripts/build-android-docker.sh --prepare   Build Docker image + sync keystores
#   ./scripts/build-android-docker.sh --pull      Clone/update source from Gitea
#   ./scripts/build-android-docker.sh --build     Build debug APK inside Docker
#   ./scripts/build-android-docker.sh --upload    Upload APKs to rustypaste
#   ./scripts/build-android-docker.sh --transfer  SCP APKs back to the local machine
#   ./scripts/build-android-docker.sh --all       pull+build+upload+transfer (image ready)
#
# Add --release to also build the release APK:
#   ./scripts/build-android-docker.sh --build --release
#   ./scripts/build-android-docker.sh --all --release
#   ./scripts/build-android-docker.sh --release   (full pipeline, debug+release)
#
# Environment variables (all optional):
#   WZP_BRANCH   Branch to build (default: feat/android-voip-client)
# =============================================================================

REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
REPO_URL="ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git"
BRANCH="${WZP_BRANCH:-feat/android-voip-client}"
DOCKER_IMAGE="wzp-android-builder"
LOCAL_OUTPUT_DIR="target/android-apk"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
LOCAL_KEYSTORE_DIR="$PROJECT_DIR/android/keystore"

SSH_OPTS="-o ConnectTimeout=10 -o LogLevel=ERROR -o ServerAliveInterval=15 -o ServerAliveCountMax=4"

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
log() { echo -e "\n\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }

ssh_cmd() {
  ssh -A $SSH_OPTS "$REMOTE_HOST" "$@"
}

push_reminder() {
  echo ""
  echo "  ┌──────────────────────────────────────────────────────────────────┐"
  echo "  │ IMPORTANT: Push your changes to origin (Gitea) before build!     │"
  echo "  │                                                                  │"
  echo "  │ The build fetches from:                                          │"
  echo "  │   ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git            │"
  echo "  │                                                                  │"
  echo "  │ Run: git push origin $BRANCH"
  echo "  └──────────────────────────────────────────────────────────────────┘"
  echo ""
  read -r -p "Press Enter to continue (Ctrl-C to abort)... "
}
# ---------------------------------------------------------------------------
# --prepare: Create remote dirs, build Docker image, sync keystores
# ---------------------------------------------------------------------------
do_prepare() {
  log "Preparing remote environment..."
  ssh_cmd "mkdir -p $BASE_DIR/data/{source,cache/cargo-registry,cache/cargo-git,cache/target,cache/gradle,keystore}"

  # Sync keystores (gitignored — won't exist after clone)
  REMOTE_HAS_KEYSTORE=$(ssh_cmd "[ -f $BASE_DIR/data/keystore/wzp-debug.jks ] && echo yes || echo no")
  if [ "$REMOTE_HAS_KEYSTORE" = "no" ]; then
    if [ -f "$LOCAL_KEYSTORE_DIR/wzp-debug.jks" ]; then
      log "Uploading keystores to remote persistent storage..."
      scp $SSH_OPTS \
        "$LOCAL_KEYSTORE_DIR/wzp-debug.jks" \
        "$LOCAL_KEYSTORE_DIR/wzp-release.jks" \
        "$REMOTE_HOST:$BASE_DIR/data/keystore/"
      echo "  Keystores uploaded to $BASE_DIR/data/keystore/"
    else
      err "No keystores found locally at $LOCAL_KEYSTORE_DIR/"
      err "Build will generate a temporary debug keystore instead."
    fi
  else
    echo "  Keystores already on remote."
  fi

  # Upload Dockerfile from local (always use the local version — no git dependency)
  log "Uploading Dockerfile to remote..."
  ssh_cmd "mkdir -p $BASE_DIR/data/source/scripts"
  scp $SSH_OPTS \
    "$PROJECT_DIR/scripts/Dockerfile.android-builder" \
    "$REMOTE_HOST:$BASE_DIR/data/source/scripts/Dockerfile.android-builder"

  # Build Docker image
  log "Building Docker image (Debian 12 + Rust + Android SDK/NDK)..."
  ssh_cmd bash <<IMAGE_EOF
set -euo pipefail
docker build -t "$DOCKER_IMAGE" - < "$BASE_DIR/data/source/scripts/Dockerfile.android-builder"
echo "  Docker image '$DOCKER_IMAGE' ready."
IMAGE_EOF
}
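The `ssh_cmd bash <<IMAGE_EOF` block relies on heredoc quoting rules: with an unquoted delimiter, `$BASE_DIR` and `$DOCKER_IMAGE` expand locally before the text ever reaches the remote shell, whereas the quoted `<<'UPLOAD_EOF'` later in this script leaves `$`-expressions for the remote side. A local demonstration, with plain `bash` standing in for the SSH hop and a made-up value:

```shell
BASE_DIR="/mnt/storage/demo"   # hypothetical value, set on the "local" side

# Unquoted delimiter: the variable is substituted before the child bash runs
bash <<EOF
echo "local expansion: $BASE_DIR"
EOF

# Quoted delimiter: text passes through verbatim; the child shell evaluates
# the expression itself, and BASE_DIR is unset there (it was never exported)
bash <<'EOF'
echo "remote expansion: ${BASE_DIR:-<unset>}"
EOF
```

This is why `IMAGE_EOF`/`PULL_EOF`/`KS_EOF` are unquoted (local config baked in) while `UPLOAD_EOF` is quoted (secrets read on the remote host).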
# ---------------------------------------------------------------------------
# --pull: Clone or update source from Gitea
# ---------------------------------------------------------------------------
do_pull() {
  push_reminder

  log "Updating source (branch: $BRANCH)..."
  ssh_cmd bash <<PULL_EOF
set -euo pipefail
mkdir -p "$BASE_DIR/data/source" \
  "$BASE_DIR/data/cache/cargo-registry" \
  "$BASE_DIR/data/cache/cargo-git" \
  "$BASE_DIR/data/cache/target" \
  "$BASE_DIR/data/cache/gradle" \
  "$BASE_DIR/data/keystore"
cd "$BASE_DIR/data/source"
if [ -d .git ]; then
  echo "  Fetching origin..."
  git fetch origin
  git checkout "$BRANCH" 2>/dev/null || git checkout -b "$BRANCH" "origin/$BRANCH"
  git reset --hard "origin/$BRANCH"
else
  echo "  Cloning repo..."
  cd "$BASE_DIR/data"
  rm -rf source
  git clone --branch "$BRANCH" "$REPO_URL" source
  cd source
fi
git submodule update --init || true
echo "  HEAD:   \$(git log --oneline -1)"
echo "  Branch: \$(git branch --show-current)"
PULL_EOF

  # Inject keystores into source tree
  log "Injecting keystores into source tree..."
  ssh_cmd bash <<KS_EOF
set -euo pipefail
mkdir -p "$BASE_DIR/data/source/android/keystore"
if [ -f "$BASE_DIR/data/keystore/wzp-debug.jks" ]; then
  cp "$BASE_DIR/data/keystore/wzp-debug.jks" "$BASE_DIR/data/source/android/keystore/"
  cp "$BASE_DIR/data/keystore/wzp-release.jks" "$BASE_DIR/data/source/android/keystore/"
  echo "  Keystores ready (wzp-debug.jks + wzp-release.jks)"
else
  echo "  WARNING: No keystores in persistent storage — build will generate temporary ones"
fi
KS_EOF
}
# ---------------------------------------------------------------------------
# --build: Build APK inside Docker container
# $1 = "1" to also build release APK (default: debug only)
# ---------------------------------------------------------------------------
do_build() {
  local build_release="${1:-0}"

  if [ "$build_release" = "1" ]; then
    log "Building debug + release APKs inside Docker container..."
  else
    log "Building debug APK inside Docker container..."
  fi

  ssh_cmd bash <<BUILD_EOF
set -euo pipefail

# Ensure uid 1000 can write to mounted volumes.
# Use find to only chown files not already 1000:1000; ignore errors on stubborn files.
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache" \
  ! -user 1000 -o ! -group 1000 2>/dev/null | \
  xargs -r chown 1000:1000 2>/dev/null || true

docker run --rm \
  --user 1000:1000 \
  -e BUILD_RELEASE="$build_release" \
  -v "$BASE_DIR/data/source:/build/source" \
  -v "$BASE_DIR/data/cache/cargo-registry:/home/builder/.cargo/registry" \
  -v "$BASE_DIR/data/cache/cargo-git:/home/builder/.cargo/git" \
  -v "$BASE_DIR/data/cache/target:/build/source/target" \
  -v "$BASE_DIR/data/cache/gradle:/home/builder/.gradle" \
  "$DOCKER_IMAGE" \
  bash -c '
    set -euo pipefail
    cd /build/source

    echo ">>> Building Rust native library (arm64-v8a, release)..."

    # Clean stale jniLibs so cargo-ndk re-copies libc++_shared.so
    rm -rf android/app/src/main/jniLibs/arm64-v8a

    cargo ndk -t arm64-v8a \
      -o android/app/src/main/jniLibs \
      build --release -p wzp-android 2>&1 | tail -10

    [ -f android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ] || {
      echo "ERROR: libwzp_android.so not found after build"; exit 1;
    }
    echo "  .so size: \$(du -h android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so | cut -f1)"

    # Verify keystores exist (should have been injected by --pull)
    if [ -f android/keystore/wzp-debug.jks ] && [ -f android/keystore/wzp-release.jks ]; then
      echo "  Keystores: wzp-debug.jks + wzp-release.jks (from persistent storage)"
    else
      echo "WARNING: Keystores missing — generating temporary debug keystore..."
      mkdir -p android/keystore
      keytool -genkey -v \
        -keystore android/keystore/wzp-debug.jks \
        -keyalg RSA -keysize 2048 -validity 10000 \
        -alias wzp-debug -storepass android -keypass android \
        -dname "CN=WZP Debug" 2>&1 | tail -1
      cp android/keystore/wzp-debug.jks android/keystore/wzp-release.jks
    fi

    cd android
    chmod +x ./gradlew

    echo ">>> Building debug APK..."
    ./gradlew assembleDebug --no-daemon --warning-mode=none 2>&1 | tail -5

    if [ "\${BUILD_RELEASE}" = "1" ]; then
      echo ">>> Building release APK..."
      ./gradlew assembleRelease --no-daemon --warning-mode=none 2>&1 | tail -5 || \
        echo "  (release build failed — debug APK still available)"
    fi

    echo ""
    echo ">>> Build artifacts:"
    find . -name "*.apk" -path "*/outputs/apk/*" -exec ls -lh {} \;
  '
BUILD_EOF
}
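One caveat on the `find … | xargs -r chown` permission fix-up above: newline-delimited `xargs` splits on whitespace, so paths containing spaces would break it. The fixed volume paths here make that safe, but the NUL-delimited form is the robust variant; a local sketch with deliberately awkward filenames:

```shell
demo=$(mktemp -d)
mkdir -p "$demo/with space"
touch "$demo/with space/file a" "$demo/with space/file b"

# -print0 / xargs -0 keep "file a" and "file b" intact as single arguments
count=$(find "$demo" -type f -print0 | xargs -0 -r -n1 echo | grep -c .)
echo "handled $count files"

rm -rf "$demo"
```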
# ---------------------------------------------------------------------------
# --upload: Upload APKs to rustypaste
# ---------------------------------------------------------------------------
do_upload() {
  log "Uploading APKs to rustypaste..."

  UPLOAD_RESULT=$(ssh_cmd bash <<'UPLOAD_EOF'
set -euo pipefail

BASE_DIR="/mnt/storage/manBuilder"
ENV_FILE="$BASE_DIR/.env"

if [ ! -f "$ENV_FILE" ]; then
  echo "ERROR: $ENV_FILE not found — create it with rusty_address and rusty_auth_token" >&2
  exit 1
fi

source "$ENV_FILE"

if [ -z "${rusty_address:-}" ] || [ -z "${rusty_auth_token:-}" ]; then
  echo "ERROR: rusty_address or rusty_auth_token not set in $ENV_FILE" >&2
  exit 1
fi

upload_apk() {
  local apk="$1" label="$2"
  if [ -f "$apk" ]; then
    local url
    url=$(curl -s -F "file=@$apk" -H "Authorization: $rusty_auth_token" "$rusty_address")
    echo "$label: $url"
  fi
}

DEBUG_APK=$(find "$BASE_DIR/data/source/android" -name "app-debug*.apk" -path "*/outputs/apk/*" 2>/dev/null | head -1)
RELEASE_APK=$(find "$BASE_DIR/data/source/android" -name "app-release*.apk" -path "*/outputs/apk/*" 2>/dev/null | head -1)

upload_apk "${DEBUG_APK:-}" "debug"
upload_apk "${RELEASE_APK:-}" "release"
UPLOAD_EOF
  )

  echo "$UPLOAD_RESULT"
}
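The upload step sources `.env` and then validates both keys before touching the network. That validate-after-source pattern in isolation — the file contents and values here are stand-ins written to a temp path, not real credentials:

```shell
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
rusty_address="https://paste.example"
rusty_auth_token="s3cret"
EOF

source "$env_file"

# Same guard as do_upload: fail fast if either key is missing or empty
if [ -z "${rusty_address:-}" ] || [ -z "${rusty_auth_token:-}" ]; then
  echo "missing configuration"
else
  echo "configuration ok"
fi

rm -f "$env_file"
```

The `${var:-}` default expansions matter under `set -u`: referencing an unset variable bare would abort the script instead of reaching the error message.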
# ---------------------------------------------------------------------------
# --transfer: SCP APKs back to local machine
# ---------------------------------------------------------------------------
do_transfer() {
  log "Downloading APKs to local machine..."

  mkdir -p "$LOCAL_OUTPUT_DIR"

  # Debug APK
  DEBUG_REMOTE=$(ssh_cmd "find $BASE_DIR/data/source/android -name 'app-debug*.apk' -path '*/outputs/apk/*' 2>/dev/null | head -1" || true)
  if [ -n "$DEBUG_REMOTE" ]; then
    scp $SSH_OPTS "$REMOTE_HOST:$DEBUG_REMOTE" "$LOCAL_OUTPUT_DIR/wzp-debug.apk"
    echo "  debug: $LOCAL_OUTPUT_DIR/wzp-debug.apk ($(du -h "$LOCAL_OUTPUT_DIR/wzp-debug.apk" | cut -f1))"
  fi

  # Release APK
  RELEASE_REMOTE=$(ssh_cmd "find $BASE_DIR/data/source/android -name 'app-release*.apk' -path '*/outputs/apk/*' 2>/dev/null | head -1" || true)
  if [ -n "$RELEASE_REMOTE" ]; then
    scp $SSH_OPTS "$REMOTE_HOST:$RELEASE_REMOTE" "$LOCAL_OUTPUT_DIR/wzp-release.apk"
    echo "  release: $LOCAL_OUTPUT_DIR/wzp-release.apk ($(du -h "$LOCAL_OUTPUT_DIR/wzp-release.apk" | cut -f1))"
  fi

  # Also grab the .so
  scp $SSH_OPTS "$REMOTE_HOST:$BASE_DIR/data/source/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so" \
    "$LOCAL_OUTPUT_DIR/libwzp_android.so" 2>/dev/null \
    && echo "  .so: $LOCAL_OUTPUT_DIR/libwzp_android.so" || true
}
# ---------------------------------------------------------------------------
# Summary banner
# ---------------------------------------------------------------------------
show_summary() {
  log "All done!"
  echo ""
  echo "  ┌──────────────────────────────────────────────────────────────┐"
  [ -f "$LOCAL_OUTPUT_DIR/wzp-debug.apk" ] && \
    echo "  │ Debug APK:   $LOCAL_OUTPUT_DIR/wzp-debug.apk"
  [ -f "$LOCAL_OUTPUT_DIR/wzp-release.apk" ] && \
    echo "  │ Release APK: $LOCAL_OUTPUT_DIR/wzp-release.apk"
  echo "  │"
  if [ -n "${UPLOAD_RESULT:-}" ]; then
    echo "  │ Rustypaste:"
    echo "$UPLOAD_RESULT" | while read -r line; do
      echo "  │   $line"
    done
    echo "  │"
  fi
  echo "  │ Install: adb install -r $LOCAL_OUTPUT_DIR/wzp-debug.apk"
  echo "  └──────────────────────────────────────────────────────────────┘"
}
# ---------------------------------------------------------------------------
# Parse arguments
# ---------------------------------------------------------------------------
ACTION=""
BUILD_RELEASE=0

for arg in "$@"; do
  case "$arg" in
    --release) BUILD_RELEASE=1 ;;
    --prepare|--pull|--build|--upload|--transfer|--all)
      if [ -n "$ACTION" ]; then
        err "Multiple actions specified: $ACTION and $arg"
        exit 1
      fi
      ACTION="$arg"
      ;;
    *)
      echo "Usage: $0 [--prepare|--pull|--build|--upload|--transfer|--all] [--release]"
      echo ""
      echo "Actions:"
      echo "  (no action)  Full pipeline: pull → prepare → build → upload → transfer"
      echo "  --prepare    Build Docker image + sync keystores to remote"
      echo "  --pull       Clone/update source from Gitea + inject keystores"
      echo "  --build      Build debug APK inside Docker container"
      echo "  --upload     Upload APKs to rustypaste"
      echo "  --transfer   SCP APKs + .so back to local machine"
      echo "  --all        pull → build → upload → transfer (Docker image ready)"
      echo ""
      echo "Flags:"
      echo "  --release    Also build release APK (default: debug only)"
      echo ""
      echo "Examples:"
      echo "  $0                      # full pipeline, debug only"
      echo "  $0 --release            # full pipeline, debug + release"
      echo "  $0 --build              # debug APK only"
      echo "  $0 --build --release    # debug + release APKs"
      echo "  $0 --all                # iterate: pull+build+upload+transfer (debug)"
      echo "  $0 --all --release      # iterate with release too"
      echo ""
      echo "Environment:"
      echo "  WZP_BRANCH=$BRANCH"
      exit 1
      ;;
  esac
done
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Dispatch
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
case "${ACTION:-}" in
|
||||||
|
--prepare)
|
||||||
|
do_prepare
|
||||||
|
;;
|
||||||
|
--pull)
|
||||||
|
do_pull
|
||||||
|
;;
|
||||||
|
--build)
|
||||||
|
do_build "$BUILD_RELEASE"
|
||||||
|
;;
|
||||||
|
--upload)
|
||||||
|
do_upload
|
||||||
|
;;
|
||||||
|
--transfer)
|
||||||
|
do_transfer
|
||||||
|
;;
|
||||||
|
--all)
|
||||||
|
do_pull
|
||||||
|
do_build "$BUILD_RELEASE"
|
||||||
|
do_upload
|
||||||
|
do_transfer
|
||||||
|
show_summary
|
||||||
|
;;
|
||||||
|
"")
|
||||||
|
do_pull
|
||||||
|
do_prepare
|
||||||
|
do_build "$BUILD_RELEASE"
|
||||||
|
do_upload
|
||||||
|
do_transfer
|
||||||
|
show_summary
|
||||||
|
;;
|
||||||
|
esac
|
||||||
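The single-action guard in the argument parser above can be exercised on its own. A minimal sketch of the same case-statement shape — `parse_action` is an illustrative helper, not part of the real script, which sets `ACTION`/`BUILD_RELEASE` inline:

```shell
#!/usr/bin/env bash
# Illustrative reduction of the "one action, many flags" parse above.
# parse_action is hypothetical; it prints the chosen action or "conflict".
parse_action() {
    local action=""
    for arg in "$@"; do
        case "$arg" in
            --release) ;;  # a flag, never an action
            --prepare|--pull|--build|--upload|--transfer|--all)
                if [ -n "$action" ]; then
                    echo "conflict"
                    return 1
                fi
                action="$arg"
                ;;
        esac
    done
    echo "${action:-default}"
}

parse_action --build --release        # prints: --build
parse_action --build --pull || true   # prints: conflict
parse_action --release                # prints: default
```

The empty-string fallthrough (`default` here, the full pipeline in the script) is what makes running the script with no action at all do something useful.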
240  scripts/build-android.sh  Executable file
@@ -0,0 +1,240 @@
#!/usr/bin/env bash
# =============================================================================
# WZ Phone — Android APK build script for Debian 12 (Bookworm)
#
# Sets up a complete build environment from scratch and produces a debug APK.
# Idempotent — safe to run multiple times (skips already-installed components).
#
# Tested on: Debian 12 x86_64, cross-compiling to aarch64-linux-android
#
# Why these specific versions:
#
#   cmake 3.25-3.28 (system package from apt)
#     cmake 3.25 (Debian 12) and 3.28 (Ubuntu 24.04) both work.
#     cmake 3.31+ has armv7/aarch64 flag conflicts in Android-Determine.cmake.
#     cmake 4.x drops cmake_minimum_required < 3.5.
#     Do NOT use pip cmake — it bundles its own modules with different bugs.
#     CRITICAL: must set ANDROID_NDK=$ANDROID_NDK_HOME (cmake checks ANDROID_NDK).
#
#   NDK 26.1.10909125 (r26b)
#     NDK 27+ ships a newer libc++_shared.so with different scudo allocator
#     defaults. On Android 16 devices with MTE (Memory Tagging Extension)
#     enabled (e.g. Nothing A059), NDK 27's scudo crashes during malloc/calloc.
#     NDK 26.1 is the last stable version for these devices.
#     Matches build.gradle.kts: ndkVersion = "26.1.10909125"
#
#   JDK 17 (openjdk-17-jdk-headless)
#     Gradle 8.5 + AGP 8.2.0 officially support JDK 17.
#     JDK 21 works for compilation but has Gradle daemon compat issues.
#
#   Rust stable (currently 1.94.1)
#     Edition 2024, MSRV 1.85. Stable channel is fine.
#
#   ANDROID_NDK=$ANDROID_NDK_HOME (BOTH must be set)
#     cmake's Android platform module checks ANDROID_NDK (no _HOME suffix).
#     cargo-ndk sets ANDROID_NDK_HOME. Both must point to the same path.
#
# Usage:
#   chmod +x scripts/build-android.sh
#   ./scripts/build-android.sh                      # build from current tree
#   WZP_CLONE=1 ./scripts/build-android.sh          # clone fresh from git
#   WZP_COMMIT=2092245 ./scripts/build-android.sh   # pin to specific commit
#
# Environment variables (all optional):
#   WZP_CLONE    Set to 1 to clone from git instead of using current dir
#   WZP_REPO     Git clone URL (default: ssh://git@git.manko.yoga:222/manawenuz/wz-phone)
#   WZP_BRANCH   Branch to checkout (default: feat/android-voip-client)
#   WZP_COMMIT   Commit to pin to (default: HEAD)
#   WZP_WORKDIR  Build directory (default: /tmp/wzp-build)
#   ANDROID_API  SDK platform level (default: 34)
#   NDK_VERSION  NDK version string (default: 26.1.10909125)
# =============================================================================
set -euo pipefail

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
CLONE="${WZP_CLONE:-0}"
REPO="${WZP_REPO:-ssh://git@git.manko.yoga:222/manawenuz/wz-phone}"
BRANCH="${WZP_BRANCH:-feat/android-voip-client}"
COMMIT="${WZP_COMMIT:-}"
WORKDIR="${WZP_WORKDIR:-/tmp/wzp-build}"
ANDROID_API="${ANDROID_API:-34}"
NDK_VERSION="${NDK_VERSION:-26.1.10909125}"

ANDROID_HOME="${ANDROID_HOME:-$HOME/android-sdk}"
ANDROID_NDK_HOME="$ANDROID_HOME/ndk/$NDK_VERSION"
# cmake checks ANDROID_NDK (not _HOME) — both must be set
ANDROID_NDK="$ANDROID_NDK_HOME"
JAVA_HOME="/usr/lib/jvm/java-17-openjdk-$(dpkg --print-architecture)"
CMDLINE_TOOLS_URL="https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip"

export ANDROID_HOME ANDROID_NDK_HOME ANDROID_NDK JAVA_HOME
export PATH="$JAVA_HOME/bin:$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/platform-tools:$HOME/.cargo/bin:$PATH"

log() { echo -e "\n\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; exit 1; }

# ---------------------------------------------------------------------------
# Step 1: System packages (cmake 3.25, JDK 17, make, git, etc.)
# ---------------------------------------------------------------------------
log "Installing system packages"
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq \
    build-essential \
    cmake \
    curl \
    git \
    libssl-dev \
    pkg-config \
    unzip \
    wget \
    zip \
    openjdk-17-jdk-headless \
    2>/dev/null

# Verify critical versions
log "Verifying build environment"
echo "  cmake: $(cmake --version | head -1)"
echo "  java:  $(java -version 2>&1 | head -1)"
echo "  make:  $(make --version | head -1)"

CMAKE_MAJOR=$(cmake --version | head -1 | grep -oP '\d+' | head -1)
CMAKE_MINOR=$(cmake --version | head -1 | grep -oP '\d+' | sed -n '2p')
if [ "$CMAKE_MAJOR" -gt 3 ] || { [ "$CMAKE_MAJOR" -eq 3 ] && [ "$CMAKE_MINOR" -gt 30 ]; }; then
    err "cmake $(cmake --version | head -1) is too new! Need cmake <= 3.28.x. cmake 3.31+ has Android cross-compilation bugs."
fi

# ---------------------------------------------------------------------------
# Step 2: Rust toolchain
# ---------------------------------------------------------------------------
log "Setting up Rust toolchain"
if ! command -v rustup &>/dev/null; then
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
    source "$HOME/.cargo/env"
fi
rustup default stable
rustup target add aarch64-linux-android
echo "  rustc: $(rustc --version)"
echo "  cargo: $(cargo --version)"

if ! command -v cargo-ndk &>/dev/null; then
    log "Installing cargo-ndk"
    cargo install cargo-ndk
fi
echo "  ndk: $(cargo ndk --version)"

# ---------------------------------------------------------------------------
# Step 3: Android SDK + NDK 26.1
# ---------------------------------------------------------------------------
log "Setting up Android SDK + NDK $NDK_VERSION"
if [ ! -f "$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager" ]; then
    log "Downloading Android command-line tools"
    mkdir -p "$ANDROID_HOME/cmdline-tools"
    TMPZIP=$(mktemp /tmp/cmdline-tools-XXXXX.zip)
    wget -q -O "$TMPZIP" "$CMDLINE_TOOLS_URL"
    unzip -qo "$TMPZIP" -d "$ANDROID_HOME/cmdline-tools"
    mv "$ANDROID_HOME/cmdline-tools/cmdline-tools" "$ANDROID_HOME/cmdline-tools/latest" 2>/dev/null || true
    rm -f "$TMPZIP"
fi

yes | sdkmanager --licenses >/dev/null 2>&1 || true

if [ ! -d "$ANDROID_NDK_HOME" ]; then
    log "Installing NDK $NDK_VERSION (this takes a few minutes)"
    sdkmanager --install \
        "platforms;android-${ANDROID_API}" \
        "build-tools;${ANDROID_API}.0.0" \
        "ndk;${NDK_VERSION}" \
        "platform-tools" \
        2>&1 | grep -v "^\[" || true
fi

[ -d "$ANDROID_NDK_HOME" ] || err "NDK not found at $ANDROID_NDK_HOME"
echo "  NDK: $ANDROID_NDK_HOME"
echo "  SDK: $ANDROID_HOME"

# ---------------------------------------------------------------------------
# Step 4: Source code
# ---------------------------------------------------------------------------
if [ "$CLONE" = "1" ]; then
    log "Cloning $REPO (branch: $BRANCH)"
    if [ -d "$WORKDIR/.git" ]; then
        cd "$WORKDIR"
        git fetch origin
    else
        rm -rf "$WORKDIR"
        git clone --branch "$BRANCH" --recurse-submodules "$REPO" "$WORKDIR"
        cd "$WORKDIR"
    fi
    git checkout "$BRANCH"
    git pull origin "$BRANCH" || true
    git submodule update --init --recursive

    if [ -n "$COMMIT" ]; then
        log "Pinning to commit $COMMIT"
        git checkout "$COMMIT"
    fi
else
    # Use current directory (assume we're in the repo root)
    SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
    WORKDIR="$(cd "$SCRIPT_DIR/.." && pwd)"
    cd "$WORKDIR"
    [ -f "Cargo.toml" ] || err "Not in repo root. Run from repo root or set WZP_CLONE=1"
fi

echo "  HEAD: $(git log --oneline -1)"

# ---------------------------------------------------------------------------
# Step 5: Build native Rust library (.so)
# ---------------------------------------------------------------------------
log "Building Rust native library (arm64-v8a, release)"
cargo ndk -t arm64-v8a \
    -o "$WORKDIR/android/app/src/main/jniLibs" \
    build --release -p wzp-android

SO="$WORKDIR/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so"
[ -f "$SO" ] || err ".so not found at $SO"
echo "  Built: $SO ($(du -h "$SO" | cut -f1))"

# ---------------------------------------------------------------------------
# Step 6: Generate debug keystore (if missing)
# ---------------------------------------------------------------------------
KEYSTORE="$WORKDIR/android/keystore/wzp-debug.jks"
if [ ! -f "$KEYSTORE" ]; then
    log "Generating debug keystore"
    mkdir -p "$(dirname "$KEYSTORE")"
    keytool -genkey -v \
        -keystore "$KEYSTORE" \
        -keyalg RSA -keysize 2048 -validity 10000 \
        -alias wzp-debug \
        -storepass android -keypass android \
        -dname "CN=WZP Debug" 2>&1 | tail -1
fi

# ---------------------------------------------------------------------------
# Step 7: Build Android APK
# ---------------------------------------------------------------------------
log "Building APK (debug)"
cd "$WORKDIR/android"
chmod +x ./gradlew
./gradlew assembleDebug --no-daemon --warning-mode=none

APK=$(find . -name "app-debug*.apk" -path "*/outputs/apk/*" | head -1)
[ -n "$APK" ] || err "APK not found"
APK_ABS="$(cd "$(dirname "$APK")" && pwd)/$(basename "$APK")"

# ---------------------------------------------------------------------------
# Done
# ---------------------------------------------------------------------------
log "Build complete!"
echo ""
echo " ┌──────────────────────────────────────────────────────────┐"
echo " │ APK:    $APK_ABS"
echo " │ Size:   $(du -h "$APK_ABS" | cut -f1)"
echo " │ SHA256: $(sha256sum "$APK_ABS" | cut -d' ' -f1)"
echo " └──────────────────────────────────────────────────────────┘"
echo ""
echo " Install: adb install -r $APK_ABS"
echo ""
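The cmake guard in Step 1 above splits the major/minor numbers out of `cmake --version` with `grep -oP`. The same check can be exercised against literal version strings without a cmake install. A minimal sketch, assuming GNU grep — `too_new` is an illustrative helper, not in the script:

```shell
#!/usr/bin/env bash
# Sketch of the Step 1 cmake version guard, factored so it can be run
# against arbitrary "cmake version X.Y.Z" strings. too_new is hypothetical.
too_new() {
    local ver="$1" major minor
    major=$(echo "$ver" | grep -oP '\d+' | head -1)   # first number
    minor=$(echo "$ver" | grep -oP '\d+' | sed -n '2p')  # second number
    # Reject > 3.30 (3.31+ and 4.x have the Android cross-compile issues)
    if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -gt 30 ]; }; then
        echo "too-new"
    else
        echo "ok"
    fi
}

too_new "cmake version 3.25.1"   # prints: ok
too_new "cmake version 3.31.2"   # prints: too-new
too_new "cmake version 4.0.0"    # prints: too-new
```

Note the guard only rejects versions that are too new; a cmake older than 3.25 would pass it and fail later in the build.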
166  scripts/build-linux-docker.sh  Executable file
@@ -0,0 +1,166 @@
#!/usr/bin/env bash
set -euo pipefail

# Build WarzonePhone Linux x86_64 binaries via Docker on SepehrHomeserverdk.
# Reuses the same Docker image as the Android build (has Rust + cmake + build tools).
# Fire and forget — notifies via ntfy.sh/wzp with the rustypaste URL.
#
# Usage:
#   ./scripts/build-linux-docker.sh             Build + upload + notify
#   ./scripts/build-linux-docker.sh --pull      Git pull before building
#   ./scripts/build-linux-docker.sh --clean     Clean Rust target cache
#   ./scripts/build-linux-docker.sh --install   Download binaries locally after build

REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/linux-x86_64"
SSH_OPTS="-o ConnectTimeout=15 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 -o LogLevel=ERROR"

DO_PULL=1
DO_CLEAN=0
DO_INSTALL=0
for arg in "$@"; do
    case "$arg" in
        --pull) DO_PULL=1 ;;
        --no-pull) DO_PULL=0 ;;
        --clean) DO_CLEAN=1 ;;
        --install) DO_INSTALL=1 ;;
    esac
done

log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }

# SSH_OPTS is intentionally unquoted so it word-splits into options
ssh_cmd() { ssh $SSH_OPTS "$REMOTE_HOST" "$@"; }

# Upload build script to remote
log "Uploading build script..."
ssh_cmd "cat > /tmp/wzp-linux-build.sh" <<'REMOTE_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail

BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
DO_PULL="${1:-0}"
DO_CLEAN="${2:-0}"

notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }

trap 'notify "WZP Linux build FAILED! Check /tmp/wzp-linux-build.log"' ERR

if [ "$DO_PULL" = "1" ]; then
    echo ">>> Pulling latest..."
    cd "$BASE_DIR/data/source"
    git reset --hard HEAD 2>/dev/null || true
    git clean -fd 2>/dev/null || true
    git gc --prune=now 2>/dev/null || true
    git fetch origin feat/android-voip-client 2>&1 | tail -3
    git reset --hard origin/feat/android-voip-client 2>/dev/null || true
fi

if [ "$DO_CLEAN" = "1" ]; then
    echo ">>> Cleaning Linux target cache..."
    rm -rf "$BASE_DIR/data/cache-linux/target"
fi

# Ensure cache dirs exist (separate from Android cache)
mkdir -p "$BASE_DIR/data/cache-linux/target" \
         "$BASE_DIR/data/cache-linux/cargo-registry" \
         "$BASE_DIR/data/cache-linux/cargo-git"

# Fix perms
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache-linux" \
    ! -user 1000 -o ! -group 1000 2>/dev/null | \
    xargs -r chown 1000:1000 2>/dev/null || true

GIT_HASH=$(cd "$BASE_DIR/data/source" && git rev-parse --short HEAD 2>/dev/null || echo "unknown")
notify "WZP Linux x86_64 build started [$GIT_HASH]..."

echo ">>> Building in Docker..."
docker run --rm --user 1000:1000 \
    -v "$BASE_DIR/data/source:/build/source" \
    -v "$BASE_DIR/data/cache-linux/cargo-registry:/home/builder/.cargo/registry" \
    -v "$BASE_DIR/data/cache-linux/cargo-git:/home/builder/.cargo/git" \
    -v "$BASE_DIR/data/cache-linux/target:/build/source/target" \
    wzp-android-builder bash -c '
        set -euo pipefail
        cd /build/source

        echo ">>> Building relay + client + web + bench..."
        cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-web --bin wzp-bench 2>&1 | tail -5

        echo ">>> Building audio client..."
        cargo build --release --bin wzp-client --features audio 2>&1 | tail -3
        cp target/release/wzp-client target/release/wzp-client-audio
        cargo build --release --bin wzp-client 2>&1 | tail -3

        echo ">>> Binaries:"
        ls -lh target/release/wzp-relay target/release/wzp-client target/release/wzp-client-audio target/release/wzp-web target/release/wzp-bench

        echo ">>> Packaging..."
        tar czf /tmp/wzp-linux-x86_64.tar.gz \
            -C target/release wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench

        echo "BINARIES_BUILT"
    '

# Upload to rustypaste
echo ">>> Uploading to rustypaste..."
source "$BASE_DIR/.env"
# The tarball above was written to /tmp INSIDE the (now-gone) container,
# so re-package from the persisted target mount and stream it out via stdout.
docker run --rm \
    -v "$BASE_DIR/data/cache-linux/target:/build/target" \
    wzp-android-builder bash -c \
    "cp /build/target/release/wzp-relay /build/target/release/wzp-client /build/target/release/wzp-client-audio /build/target/release/wzp-web /build/target/release/wzp-bench /tmp/ && tar czf /tmp/wzp-linux-x86_64.tar.gz -C /tmp wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench && cat /tmp/wzp-linux-x86_64.tar.gz" \
    > /tmp/wzp-linux-x86_64.tar.gz

URL=$(curl -s -F "file=@/tmp/wzp-linux-x86_64.tar.gz" -H "Authorization: $rusty_auth_token" "$rusty_address")
if [ -n "$URL" ]; then
    echo "UPLOAD_URL=$URL"
    notify "WZP Linux x86_64 [$GIT_HASH] ready! $URL"
    echo ">>> Done! Binaries at: $URL"
else
    notify "WZP Linux build FAILED - upload error"
    echo "ERROR: upload failed"
    exit 1
fi
REMOTE_SCRIPT

ssh_cmd "chmod +x /tmp/wzp-linux-build.sh"

# Run in tmux
log "Starting Linux build in tmux..."
ssh_cmd "tmux kill-session -t wzp-linux 2>/dev/null; true"
ssh_cmd "tmux new-session -d -s wzp-linux '/tmp/wzp-linux-build.sh $DO_PULL $DO_CLEAN 2>&1 | tee /tmp/wzp-linux-build.log'"

log "Build running! Notification on ntfy.sh/wzp when done."
echo ""
echo "  Monitor: ssh $REMOTE_HOST 'tail -f /tmp/wzp-linux-build.log'"
echo "  Status:  ssh $REMOTE_HOST 'tail -5 /tmp/wzp-linux-build.log'"
echo ""

# Optionally wait and download
if [ "$DO_INSTALL" = "1" ]; then
    log "Waiting for build..."
    while true; do
        sleep 15
        if ssh_cmd "grep -q 'UPLOAD_URL\|ERROR' /tmp/wzp-linux-build.log 2>/dev/null"; then
            break
        fi
    done

    URL=$(ssh_cmd "grep UPLOAD_URL /tmp/wzp-linux-build.log | tail -1 | cut -d= -f2")
    if [ -n "$URL" ]; then
        log "Downloading binaries..."
        mkdir -p "$LOCAL_OUTPUT"
        curl -s -o "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz" "$URL"
        tar xzf "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz" -C "$LOCAL_OUTPUT/"
        rm "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz"
        ls -lh "$LOCAL_OUTPUT"/wzp-*
        log "Done! Binaries in $LOCAL_OUTPUT/"
    else
        err "Build failed"
    fi
fi
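The `--install` path above polls the remote log for marker lines (`UPLOAD_URL`/`ERROR`) and then recovers the URL with `grep | tail | cut`. That extraction can be sketched against a fabricated local log — the log content and URL below are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the UPLOAD_URL extraction used by --install, run against a
# fabricated local log instead of the remote build log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
>>> Building in Docker...
>>> Uploading to rustypaste...
UPLOAD_URL=https://paste.example.invalid/abc123
>>> Done!
EOF

# Same pipeline as the script: last UPLOAD_URL line, value after '='.
URL=$(grep UPLOAD_URL "$LOG" | tail -1 | cut -d= -f2)
echo "$URL"   # prints: https://paste.example.invalid/abc123
rm -f "$LOG"
```

One caveat of `cut -d= -f2`: it would truncate a URL that itself contains an `=` (e.g. a query string); `sed 's/^UPLOAD_URL=//'` would be more robust if that ever matters.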
122  scripts/build-linux-notify.sh  Executable file
@@ -0,0 +1,122 @@
#!/usr/bin/env bash
set -euo pipefail

# Build WarzonePhone Linux x86_64 binaries via Hetzner Cloud VPS.
# Fire and forget — notifies via ntfy.sh/wzp with rustypaste URL.
#
# Usage:
#   ./scripts/build-linux-notify.sh           Full: create VM → build → upload → notify → destroy
#   ./scripts/build-linux-notify.sh --keep    Keep VM after build
#   ./scripts/build-linux-notify.sh --pull    Git pull (for existing VM)

SSH_KEY_NAME="wz"
SSH_KEY_PATH="/Users/manwe/CascadeProjects/wzp"
SERVER_TYPE="cx33"
IMAGE="debian-12"
SERVER_NAME="wzp-linux-builder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/linux-x86_64"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"

SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=15 -o ServerAliveInterval=15 -o LogLevel=ERROR"

KEEP_VM=0
DO_PULL=0
for arg in "$@"; do
    case "$arg" in
        --keep) KEEP_VM=1 ;;
        --pull) DO_PULL=1 ;;
    esac
done

log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }

get_vm_ip() {
    hcloud server list -o columns=name,ipv4 -o noheader 2>/dev/null | grep "$SERVER_NAME" | awk '{print $2}' | tr -d ' '
}

ssh_cmd() {
    local ip
    ip=$(get_vm_ip)
    [ -n "$ip" ] || { err "No VM found"; exit 1; }
    ssh $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip" "$@"
}

notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }

# --- Create VM if needed ---
existing=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
if [ -z "$existing" ]; then
    log "Creating Hetzner VM ($SERVER_TYPE, $IMAGE)..."
    hcloud server create --name "$SERVER_NAME" --type "$SERVER_TYPE" --image "$IMAGE" --ssh-key "$SSH_KEY_NAME" --location fsn1 --quiet

    log "Waiting for SSH..."
    ip=$(get_vm_ip)
    for i in $(seq 1 30); do
        ssh $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip" "echo ok" &>/dev/null && break
        sleep 2
    done

    log "Installing deps..."
    ssh_cmd "apt-get update -qq && apt-get install -y -qq build-essential cmake pkg-config libasound2-dev libssl-dev curl git > /dev/null 2>&1"
    ssh_cmd "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable > /dev/null 2>&1"
fi

# --- Upload source ---
log "Uploading source..."
ip=$(get_vm_ip)
rsync -az --delete \
    --exclude='target' --exclude='.git' --exclude='.claude' \
    --exclude='node_modules' --exclude='dist' --exclude='android/app/build' \
    -e "ssh $SSH_OPTS -i $SSH_KEY_PATH" \
    "$PROJECT_DIR/" "root@$ip:/root/wzp-build/"

# --- Build ---
log "Building all binaries..."
notify "WZP Linux build started..."

ssh_cmd "source ~/.cargo/env && cd /root/wzp-build && \
    cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-web --bin wzp-bench 2>&1 | tail -5 && \
    echo '--- audio client ---' && \
    cargo build --release --bin wzp-client --features audio 2>&1 | tail -3 && \
    cp target/release/wzp-client target/release/wzp-client-audio && \
    cargo build --release --bin wzp-client 2>&1 | tail -3 && \
    echo 'BUILD_DONE' && \
    ls -lh target/release/wzp-relay target/release/wzp-client target/release/wzp-client-audio target/release/wzp-web target/release/wzp-bench"

# --- Package + upload to rustypaste ---
log "Packaging and uploading..."
UPLOAD_URL=$(ssh_cmd "cd /root/wzp-build && \
    tar czf /tmp/wzp-linux-x86_64.tar.gz \
        -C target/release wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench \
        -C /root/wzp-build/crates/wzp-web/static index.html audio-processor.js 2>/dev/null && \
    curl -s -F 'file=@/tmp/wzp-linux-x86_64.tar.gz' \
        -H 'Authorization: DAxAAGghkn1WKv1+RpPKkg==' \
        https://paste.dk.manko.yoga")

if [ -n "$UPLOAD_URL" ]; then
    notify "WZP Linux binaries ready! $UPLOAD_URL"
    log "Uploaded: $UPLOAD_URL"
else
    notify "WZP Linux build FAILED"
    err "Upload failed"
fi

# --- Transfer locally ---
log "Downloading binaries..."
mkdir -p "$LOCAL_OUTPUT"
for bin in wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench; do
    scp $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip:/root/wzp-build/target/release/$bin" "$LOCAL_OUTPUT/$bin" 2>/dev/null
done
ls -lh "$LOCAL_OUTPUT"/wzp-*

# --- Cleanup ---
if [ "$KEEP_VM" = "1" ]; then
    log "VM kept alive. Destroy: hcloud server delete $SERVER_NAME"
else
    log "Destroying VM..."
    hcloud server delete "$SERVER_NAME"
fi

log "Done!"
echo "  Deploy: scp $LOCAL_OUTPUT/wzp-relay user@server:~/wzp/"
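The "Waiting for SSH..." loop above is a bounded probe-and-retry: run a probe up to N times with a delay, succeeding on the first probe success. The same shape works for any readiness check. A minimal sketch — `wait_for` is an illustrative helper, not in the script:

```shell
#!/usr/bin/env bash
# Sketch of the bounded retry used while waiting for the VM's sshd.
# wait_for TRIES DELAY CMD... returns 0 on the first successful probe,
# 1 if the probe never succeeds within TRIES attempts.
wait_for() {
    local tries="$1" delay="$2"
    shift 2
    local i
    for i in $(seq 1 "$tries"); do
        "$@" &>/dev/null && return 0
        sleep "$delay"
    done
    return 1
}

wait_for 3 0 true  && echo "up"       # probe succeeds immediately
wait_for 2 0 false || echo "timeout"  # probe never succeeds
```

In the real script the probe is `ssh ... "echo ok"` with a 2-second delay, giving roughly a minute for cloud-init to bring up sshd before the install step runs.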
253  scripts/build-tauri-android.sh  Executable file
@@ -0,0 +1,253 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# WZ Phone — Tauri 2.x Mobile Android APK build
|
||||||
|
#
|
||||||
|
# Builds the desktop/ Tauri app as an Android APK via cargo-tauri inside the
|
||||||
|
# wzp-android-builder Docker image on SepehrHomeserverdk. Uploads the APK to
|
||||||
|
# rustypaste, fires ntfy.sh/wzp notifications at start + finish, and SCPs the
|
||||||
|
# APK back locally.
|
||||||
|
#
|
||||||
|
# Same pattern as build-and-notify.sh but for the Tauri mobile pipeline:
|
||||||
|
# - Source: desktop/src-tauri/ (not android/)
|
||||||
|
# - Build: cargo tauri android build (not gradlew assembleDebug)
|
||||||
|
# - Output: desktop/src-tauri/gen/android/.../*.apk
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# ./scripts/build-tauri-android.sh # full pipeline (debug)
|
||||||
|
# ./scripts/build-tauri-android.sh --release # release APK
|
||||||
|
# ./scripts/build-tauri-android.sh --no-pull # skip git fetch
|
||||||
|
# ./scripts/build-tauri-android.sh --rust # force-clean rust target
|
||||||
|
# ./scripts/build-tauri-android.sh --init # also run `cargo tauri android init`
|
||||||
|
#
|
||||||
|
# Environment:
|
||||||
|
# WZP_BRANCH Branch to build (default: feat/desktop-audio-rewrite)
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
REMOTE_HOST="SepehrHomeserverdk"
|
||||||
|
BASE_DIR="/mnt/storage/manBuilder"
|
||||||
|
NTFY_TOPIC="https://ntfy.sh/wzp"
|
||||||
|
LOCAL_OUTPUT="target/tauri-android-apk"
|
||||||
|
BRANCH="${WZP_BRANCH:-feat/desktop-audio-rewrite}"
|
||||||
|
SSH_OPTS="-o ConnectTimeout=15 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 -o LogLevel=ERROR"
|
||||||
|
|
||||||
|
REBUILD_RUST=0
|
||||||
|
DO_PULL=1
|
||||||
|
DO_INIT=0
|
||||||
|
BUILD_RELEASE=0
|
||||||
|
for arg in "$@"; do
|
||||||
|
case "$arg" in
|
||||||
|
--rust) REBUILD_RUST=1 ;;
|
||||||
|
--pull) DO_PULL=1 ;;
|
||||||
|
--no-pull) DO_PULL=0 ;;
|
||||||
|
--init) DO_INIT=1 ;;
|
||||||
|
--release) BUILD_RELEASE=1 ;;
|
||||||
|
-h|--help)
|
||||||
|
sed -n '3,30p' "$0"
|
||||||
|
exit 0
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
done
|
||||||
|
|
||||||
|
log() { echo -e "\033[1;36m>>> $*\033[0m"; }
|
||||||
|
ssh_cmd() { ssh -A $SSH_OPTS "$REMOTE_HOST" "$@"; }
|
||||||
|
|
||||||
|
notify_local() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }
|
||||||
|
|
||||||
|
mkdir -p "$LOCAL_OUTPUT"
|
||||||
|
|
||||||
|
log "Uploading remote build script..."
|
||||||
|
ssh_cmd "cat > /tmp/wzp-tauri-build.sh" <<'REMOTE_SCRIPT'
|
||||||
|
#!/usr/bin/env bash
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
BASE_DIR="/mnt/storage/manBuilder"
|
||||||
|
NTFY_TOPIC="https://ntfy.sh/wzp"
|
||||||
|
BRANCH="${1:-feat/desktop-audio-rewrite}"
|
||||||
|
DO_PULL="${2:-1}"
|
||||||
|
REBUILD_RUST="${3:-0}"
|
||||||
|
DO_INIT="${4:-0}"
|
||||||
|
BUILD_RELEASE="${5:-0}"
|
||||||
|
|
||||||
|
LOG_FILE=/tmp/wzp-tauri-build.log
|
||||||
|
GIT_HASH="unknown" # populated after fetch
|
||||||
|
ENV_FILE="$BASE_DIR/.env"
|
||||||
|
|
||||||
|
notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }
|
||||||
|
|
||||||
|
# Upload a file to rustypaste; print URL on stdout (or empty on failure).
|
||||||
|
upload_to_rustypaste() {
|
||||||
|
local file="$1"
|
||||||
|
[ ! -f "$ENV_FILE" ] && { echo ""; return; }
|
||||||
|
# shellcheck disable=SC1090
|
||||||
|
source "$ENV_FILE"
|
||||||
|
if [ -n "${rusty_address:-}" ] && [ -n "${rusty_auth_token:-}" ]; then
|
||||||
|
curl -s -F "file=@$file" -H "Authorization: $rusty_auth_token" "$rusty_address" || echo ""
|
||||||
|
else
|
||||||
|
echo ""
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# On failure: upload the build log to rustypaste, then notify with hash + url.
|
||||||
|
on_error() {
|
||||||
|
local line="$1"
|
||||||
|
local log_url
|
||||||
|
log_url=$(upload_to_rustypaste "$LOG_FILE" || echo "")
|
||||||
|
if [ -n "$log_url" ]; then
|
||||||
|
notify "WZP Tauri Android build FAILED [$GIT_HASH] (line $line)
|
||||||
|
log: $log_url"
|
||||||
|
else
|
||||||
|
notify "WZP Tauri Android build FAILED [$GIT_HASH] (line $line) — log upload failed, see $LOG_FILE on remote"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
trap 'on_error $LINENO' ERR

exec > >(tee "$LOG_FILE") 2>&1

if [ "$DO_PULL" = "1" ]; then
    echo ">>> git fetch + reset $BRANCH"
    cd "$BASE_DIR/data/source"
    git reset --hard HEAD 2>/dev/null || true
    # NOTE: deliberately do NOT run `git clean -fd` here. It would wipe the
    # tauri-generated `desktop/src-tauri/gen/android/` scaffold (gradlew,
    # settings.gradle, etc.) which is expensive to recreate and breaks
    # subsequent builds with "gradlew not found".
    git gc --prune=now 2>/dev/null || true
    git fetch origin "$BRANCH" 2>&1 | tail -3
    git checkout "$BRANCH" 2>/dev/null || git checkout -b "$BRANCH" "origin/$BRANCH"
    git reset --hard "origin/$BRANCH"
    git submodule update --init || true
fi

GIT_HASH=$(cd "$BASE_DIR/data/source" && git rev-parse --short HEAD 2>/dev/null || echo unknown)
GIT_MSG=$(cd "$BASE_DIR/data/source" && git log -1 --pretty=%s 2>/dev/null | head -c 60 || echo "?")
notify "WZP Tauri Android build STARTED [$GIT_HASH] — $GIT_MSG"

# Fix perms so uid 1000 can write
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache" \
    ! -user 1000 -o ! -group 1000 2>/dev/null | \
    xargs -r chown 1000:1000 2>/dev/null || true

# Optionally clean rust target for android triples
if [ "$REBUILD_RUST" = "1" ]; then
    echo ">>> Cleaning Rust android target dirs..."
    rm -rf "$BASE_DIR/data/cache/target/aarch64-linux-android" \
           "$BASE_DIR/data/cache/target/armv7-linux-androideabi" \
           "$BASE_DIR/data/cache/target/i686-linux-android" \
           "$BASE_DIR/data/cache/target/x86_64-linux-android"
fi

# Profile flag
PROFILE_FLAG="--debug"
[ "$BUILD_RELEASE" = "1" ] && PROFILE_FLAG=""

# Persist ~/.android (where the auto-generated debug.keystore lives) so every
# build is signed with the SAME key. Without this, every fresh container gets
# a new debug keystore and `adb install -r` fails with INSTALL_FAILED_UPDATE_
# INCOMPATIBLE because the signature changed.
mkdir -p "$BASE_DIR/data/cache/android-home"
chown 1000:1000 "$BASE_DIR/data/cache/android-home" 2>/dev/null || true

docker run --rm \
    --user 1000:1000 \
    -e DO_INIT="$DO_INIT" \
    -e PROFILE_FLAG="$PROFILE_FLAG" \
    -v "$BASE_DIR/data/source:/build/source" \
    -v "$BASE_DIR/data/cache/cargo-registry:/home/builder/.cargo/registry" \
    -v "$BASE_DIR/data/cache/cargo-git:/home/builder/.cargo/git" \
    -v "$BASE_DIR/data/cache/target:/build/source/target" \
    -v "$BASE_DIR/data/cache/gradle:/home/builder/.gradle" \
    -v "$BASE_DIR/data/cache/android-home:/home/builder/.android" \
    wzp-android-builder \
    bash -c '
        set -euo pipefail
        cd /build/source/desktop

        echo ">>> npm install"
        npm install --silent 2>&1 | tail -5 || npm install 2>&1 | tail -20

        cd src-tauri

        # Run init if forced, OR if the gradle wrapper is missing. Just checking
        # for `gen/android` is not enough — Tauri creates a few subdirectories
        # during build (app/, buildSrc/, .gradle/) that survive a partial wipe and
        # would make a naive `[ ! -d gen/android ]` check return false even though
        # the build wrapper itself is gone.
        if [ "${DO_INIT}" = "1" ] || [ ! -x gen/android/gradlew ]; then
            echo ">>> cargo tauri android init"
            cargo tauri android init 2>&1 | tail -20
        fi

        # ─── wzp-native standalone cdylib (built with cargo-ndk, not cargo-tauri) ──
        # Produces libwzp_native.so which wzp-desktop dlopens at runtime via
        # libloading. Split exists because cargo-tauri`s linker wiring pulls
        # bionic private symbols into any cdylib with cc::Build C++, causing
        # __init_tcb+4 SIGSEGV. cargo-ndk uses the same linker path as the
        # legacy wzp-android crate which works.
        echo ">>> cargo ndk build -p wzp-native --release"
        JNI_ABI_DIR=gen/android/app/src/main/jniLibs/arm64-v8a
        mkdir -p "$JNI_ABI_DIR"
        (
            cd /build/source
            cargo ndk -t arm64-v8a -o desktop/src-tauri/gen/android/app/src/main/jniLibs \
                build --release -p wzp-native 2>&1 | tail -10
        )
        if [ -f "$JNI_ABI_DIR/libwzp_native.so" ]; then
            ls -lh "$JNI_ABI_DIR/libwzp_native.so"
        else
            echo ">>> WARNING: libwzp_native.so not produced"
        fi

        echo ">>> cargo tauri android build ${PROFILE_FLAG} --target aarch64 --apk"
        cargo tauri android build ${PROFILE_FLAG} --target aarch64 --apk

        echo ""
        echo ">>> Build artifacts:"
        find gen/android -name "*.apk" -exec ls -lh {} \; 2>/dev/null
    '

# Locate the produced APK
APK=$(find "$BASE_DIR/data/source/desktop/src-tauri/gen/android" -name "*.apk" -type f 2>/dev/null | head -1)
if [ -z "$APK" ] || [ ! -f "$APK" ]; then
    LOG_URL=$(upload_to_rustypaste "$LOG_FILE" || echo "")
    if [ -n "$LOG_URL" ]; then
        notify "WZP Tauri Android build [$GIT_HASH]: no APK produced
log: $LOG_URL"
    else
        notify "WZP Tauri Android build [$GIT_HASH]: no APK produced — log upload failed"
    fi
    exit 1
fi
APK_SIZE=$(du -h "$APK" | cut -f1)

RUSTY_URL=$(upload_to_rustypaste "$APK" || echo "")
if [ -n "$RUSTY_URL" ]; then
    notify "WZP Tauri Android build OK [$GIT_HASH] ($APK_SIZE)
$RUSTY_URL"
else
    notify "WZP Tauri Android build OK [$GIT_HASH] ($APK_SIZE) — rustypaste upload skipped"
fi

# Print path so the local script can grab it
echo "APK_REMOTE_PATH=$APK"
REMOTE_SCRIPT

ssh_cmd "chmod +x /tmp/wzp-tauri-build.sh"

notify_local "WZP Tauri Android build dispatched (branch=$BRANCH, release=$BUILD_RELEASE)"
log "Triggering remote build (branch=$BRANCH)..."

# Run; capture full output, last line is APK_REMOTE_PATH=...
REMOTE_OUTPUT=$(ssh_cmd "/tmp/wzp-tauri-build.sh '$BRANCH' '$DO_PULL' '$REBUILD_RUST' '$DO_INIT' '$BUILD_RELEASE'" || true)
echo "$REMOTE_OUTPUT" | tail -60

APK_REMOTE=$(echo "$REMOTE_OUTPUT" | grep '^APK_REMOTE_PATH=' | tail -1 | cut -d= -f2-)
if [ -n "$APK_REMOTE" ]; then
    log "Downloading APK to $LOCAL_OUTPUT/wzp-tauri.apk..."
    scp $SSH_OPTS "$REMOTE_HOST:$APK_REMOTE" "$LOCAL_OUTPUT/wzp-tauri.apk"
    echo "  $LOCAL_OUTPUT/wzp-tauri.apk ($(du -h "$LOCAL_OUTPUT/wzp-tauri.apk" | cut -f1))"
else
    log "No APK produced — see ntfy / remote log /tmp/wzp-tauri-build.log"
    exit 1
fi
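The `grep | tail | cut` marker-line parsing above is easy to get subtly wrong, so here is a standalone sketch (with made-up sample output) of why `tail -1` and `cut -d= -f2-` are both needed: the last marker wins, and any `=` characters inside the path itself survive.

```shell
# Hypothetical captured build output: noise plus two marker lines.
SAMPLE_OUTPUT=$'>>> Build artifacts:\nAPK_REMOTE_PATH=/stale/old.apk\nAPK_REMOTE_PATH=/build/app=universal-debug.apk'
# Same pipeline as the wrapper: keep the last marker, drop only the first '='.
APK_PATH=$(echo "$SAMPLE_OUTPUT" | grep '^APK_REMOTE_PATH=' | tail -1 | cut -d= -f2-)
echo "$APK_PATH"   # → /build/app=universal-debug.apk
```

With `cut -d= -f2` (no trailing dash) the path would be truncated at its first embedded `=`.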

scripts/federation-test.sh (new executable file, +280 lines)
@@ -0,0 +1,280 @@
#!/usr/bin/env bash
set -euo pipefail

# Federation Test Harness
# Tests presence, audio delivery, and reconnection across 3 relays.
#
# Usage:
#   ./scripts/federation-test.sh <relay1> <relay2> <relay3>
#   ./scripts/federation-test.sh 172.16.81.175:4434 172.16.81.175:4435 172.16.81.175:4436
#
# Requires: wzp-client binary in PATH or target/release/

RELAY1="${1:-127.0.0.1:4433}"
RELAY2="${2:-127.0.0.1:4434}"
RELAY3="${3:-127.0.0.1:4435}"
ROOM="general"
CLIENT="${WZP_CLIENT:-target/release/wzp-client}"
AUDIO="/tmp/test-audio-60s.raw"
RESULTS="/tmp/federation-test-results"
DURATION=15 # seconds per test phase

# Fixed seeds for reproducible identities
SEED_A="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
SEED_B="bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
SEED_C="cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc"
SEED_D="dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd"

log()  { echo -e "\033[1;36m>>> $*\033[0m"; }
err()  { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }
pass() { echo -e "\033[1;32m PASS: $*\033[0m"; }
fail() { echo -e "\033[1;31m FAIL: $*\033[0m"; }

analyze() {
    local path="$1" label="$2"
    if [ ! -f "$path" ] || [ ! -s "$path" ]; then
        fail "$label: NO FILE or empty"
        return 1
    fi
    python3 -c "
import struct, math
with open('$path', 'rb') as f: data = f.read()
if len(data) < 4:
    print(' $label: EMPTY')
    exit(1)
samples = struct.unpack(f'<{len(data)//2}h', data)
n = len(samples)
rms = math.sqrt(sum(s*s for s in samples) / n) if n > 0 else 0
dur = n / 48000
nonzero = sum(1 for s in samples if s != 0)
pct = 100 * nonzero / n if n > 0 else 0
if rms > 50 and pct > 5:
    print(f' \033[32mPASS\033[0m: $label — {dur:.1f}s, RMS {rms:.0f}, {pct:.0f}% nonzero')
else:
    print(f' \033[31mFAIL\033[0m: $label — {dur:.1f}s, RMS {rms:.0f}, {pct:.0f}% nonzero')
    exit(1)
" 2>/dev/null
}

cleanup() {
    log "Cleaning up..."
    kill "${PIDS[@]}" 2>/dev/null || true
    wait 2>/dev/null || true
}
trap cleanup EXIT

mkdir -p "$RESULTS"
PIDS=()

# Generate test audio if missing
if [ ! -f "$AUDIO" ]; then
    log "Generating test audio..."
    python3 -c "
import struct, math, random
RATE = 48000; samples = []
t = 0
while t < 60 * RATE:
    burst = random.randint(int(RATE*0.2), int(RATE*0.8))
    freq = random.choice([220,330,440,550,660,880])
    amp = random.uniform(8000,16000)
    for i in range(min(burst, 60*RATE-t)):
        s = amp * math.sin(2*math.pi*freq*(t+i)/RATE)
        samples.append(int(max(-32767,min(32767,s))))
    t += burst
    sil = random.randint(int(RATE*0.1), int(RATE*0.5))
    samples.extend([0]*min(sil, 60*RATE-t)); t += sil
with open('$AUDIO', 'wb') as f:
    f.write(struct.pack(f'<{len(samples)}h', *samples))
print(f'Generated {len(samples)/RATE:.1f}s')
"
fi

echo ""
echo "╔══════════════════════════════════════════════════════════╗"
echo "║            WarzonePhone Federation Test Suite            ║"
echo "╠══════════════════════════════════════════════════════════╣"
echo "║ Relay 1: $RELAY1"
echo "║ Relay 2: $RELAY2"
echo "║ Relay 3: $RELAY3"
echo "║ Room: $ROOM"
echo "║ Duration: ${DURATION}s per phase"
echo "╚══════════════════════════════════════════════════════════╝"
echo ""

# ═══════════════════════════════════════════════════════════════
# TEST 1: Basic 2-relay audio
# ═══════════════════════════════════════════════════════════════
log "TEST 1: Basic audio — A sends on Relay1, B records on Relay2"

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_B --record "$RESULTS/t1_b.raw" "$RELAY2" &
PIDS+=($!); sleep 2

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_A --send-tone $DURATION "$RELAY1" &
PIDS+=($!); sleep $((DURATION + 3))

kill -INT ${PIDS[-2]} 2>/dev/null; sleep 3; kill -INT ${PIDS[-1]} 2>/dev/null; wait ${PIDS[-1]} ${PIDS[-2]} 2>/dev/null || true
PIDS=("${PIDS[@]:0:${#PIDS[@]}-2}")

analyze "$RESULTS/t1_b.raw" "Relay1→Relay2 audio"
echo ""

# ═══════════════════════════════════════════════════════════════
# TEST 2: Reverse direction
# ═══════════════════════════════════════════════════════════════
log "TEST 2: Reverse — B sends on Relay2, A records on Relay1"

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_A --record "$RESULTS/t2_a.raw" "$RELAY1" &
PIDS+=($!); sleep 2

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_B --send-tone $DURATION "$RELAY2" &
PIDS+=($!); sleep $((DURATION + 3))

kill -INT ${PIDS[-2]} 2>/dev/null; sleep 3; kill -INT ${PIDS[-1]} 2>/dev/null; wait ${PIDS[-1]} ${PIDS[-2]} 2>/dev/null || true
PIDS=("${PIDS[@]:0:${#PIDS[@]}-2}")

analyze "$RESULTS/t2_a.raw" "Relay2→Relay1 audio"
echo ""

# ═══════════════════════════════════════════════════════════════
# TEST 3: 3-relay chain
# ═══════════════════════════════════════════════════════════════
log "TEST 3: 3-relay chain — A sends on Relay1, C records on Relay3"

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_C --record "$RESULTS/t3_c.raw" "$RELAY3" &
PIDS+=($!); sleep 2

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_A --send-tone $DURATION "$RELAY1" &
PIDS+=($!); sleep $((DURATION + 3))

kill -INT ${PIDS[-2]} 2>/dev/null; sleep 3; kill -INT ${PIDS[-1]} 2>/dev/null; wait ${PIDS[-1]} ${PIDS[-2]} 2>/dev/null || true
PIDS=("${PIDS[@]:0:${#PIDS[@]}-2}")

analyze "$RESULTS/t3_c.raw" "Relay1→Relay3 (via Relay2) audio"
echo ""

# ═══════════════════════════════════════════════════════════════
# TEST 4: File playback (simulated talk show)
# ═══════════════════════════════════════════════════════════════
log "TEST 4: File playback — A plays audio file on Relay1, B records on Relay2"

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_B --record "$RESULTS/t4_b.raw" "$RELAY2" &
PIDS+=($!); sleep 2

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_A --send-file "$AUDIO" "$RELAY1" &
PIDS+=($!); sleep 20  # file is 60s but we only wait 20

kill -INT ${PIDS[-2]} 2>/dev/null; sleep 3; kill -INT ${PIDS[-1]} 2>/dev/null; wait ${PIDS[-1]} ${PIDS[-2]} 2>/dev/null || true
PIDS=("${PIDS[@]:0:${#PIDS[@]}-2}")

analyze "$RESULTS/t4_b.raw" "File playback Relay1→Relay2"
echo ""

# ═══════════════════════════════════════════════════════════════
# TEST 5: Reconnection — B disconnects and rejoins
# ═══════════════════════════════════════════════════════════════
log "TEST 5: Reconnection — A sends, B joins/leaves/rejoins on Relay2"

# A sends continuously
RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_A --send-tone 30 "$RELAY1" &
A_PID=$!; PIDS+=($A_PID)
sleep 2

# B joins and records for 5s
RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_B --record "$RESULTS/t5_b_first.raw" "$RELAY2" &
B_PID=$!; PIDS+=($B_PID)
sleep 5
kill -INT $B_PID 2>/dev/null; wait $B_PID 2>/dev/null || true

log "  B disconnected, waiting 3s..."
sleep 3

# B rejoins and records again (killed after 8s)
RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_B --record "$RESULTS/t5_b_rejoin.raw" "$RELAY2" &
B_PID=$!; PIDS+=($B_PID)
sleep 8
kill -INT $B_PID 2>/dev/null; wait $B_PID 2>/dev/null || true
kill -INT $A_PID 2>/dev/null; wait $A_PID 2>/dev/null || true

analyze "$RESULTS/t5_b_first.raw" "B first join (before disconnect)"
analyze "$RESULTS/t5_b_rejoin.raw" "B rejoin (after disconnect)"
echo ""

# ═══════════════════════════════════════════════════════════════
# TEST 6: Multi-participant — 3 users on 3 relays
# ═══════════════════════════════════════════════════════════════
log "TEST 6: Multi-participant — A sends on R1, B records on R2, C records on R3"

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_B --record "$RESULTS/t6_b.raw" "$RELAY2" &
PIDS+=($!); sleep 1
RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_C --record "$RESULTS/t6_c.raw" "$RELAY3" &
PIDS+=($!); sleep 1
RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_A --send-tone $DURATION "$RELAY1" &
PIDS+=($!); sleep $((DURATION + 3))

# Kill all 3
for i in 1 2 3; do
    kill -INT ${PIDS[-$i]} 2>/dev/null || true
done
wait 2>/dev/null || true
PIDS=()

analyze "$RESULTS/t6_b.raw" "B on Relay2 hears A on Relay1"
analyze "$RESULTS/t6_c.raw" "C on Relay3 hears A on Relay1"
echo ""

# ═══════════════════════════════════════════════════════════════
# TEST 7: Simultaneous senders
# ═══════════════════════════════════════════════════════════════
log "TEST 7: Simultaneous — A sends 440Hz on R1, B sends 880Hz on R2, C records on R3"

RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_C --record "$RESULTS/t7_c.raw" "$RELAY3" &
PIDS+=($!); sleep 2
RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_A --send-tone $DURATION "$RELAY1" &
PIDS+=($!)
RUST_LOG=error $CLIENT --room $ROOM --seed $SEED_B --send-tone $DURATION "$RELAY2" &
PIDS+=($!); sleep $((DURATION + 3))

for i in 1 2 3; do kill ${PIDS[-$i]} 2>/dev/null || true; done
wait 2>/dev/null || true
PIDS=()

analyze "$RESULTS/t7_c.raw" "C hears both A(R1) + B(R2)"
echo ""

# ═══════════════════════════════════════════════════════════════
# SUMMARY
# ═══════════════════════════════════════════════════════════════
echo ""
echo "╔══════════════════════════════════════════════════════════╗"
echo "║                       TEST SUMMARY                       ║"
echo "╠══════════════════════════════════════════════════════════╣"

PASS=0; FAIL=0
for f in "$RESULTS"/t*.raw; do
    label=$(basename "$f" .raw)
    if [ -s "$f" ]; then
        rms=$(python3 -c "
import struct, math
with open('$f','rb') as f: d=f.read()
s=struct.unpack(f'<{len(d)//2}h',d)
print(f'{math.sqrt(sum(x*x for x in s)/len(s)):.0f}')
" 2>/dev/null || echo "0")
        if [ "$rms" -gt 50 ] 2>/dev/null; then
            echo "║ ✓ $label (RMS: $rms)"
            PASS=$((PASS + 1))
        else
            echo "║ ✗ $label (RMS: $rms)"
            FAIL=$((FAIL + 1))
        fi
    else
        echo "║ ✗ $label (NO FILE)"
        FAIL=$((FAIL + 1))
    fi
done

echo "╠══════════════════════════════════════════════════════════╣"
echo "║ PASSED: $PASS   FAILED: $FAIL"
echo "╚══════════════════════════════════════════════════════════╝"
echo ""
echo "Recordings saved to: $RESULTS/"
echo "Play with: ffplay -f s16le -ar 48000 -ac 1 $RESULTS/<file>.raw"
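The pass criterion used by both analyze() and the summary (RMS above 50 with more than 5% nonzero samples, at 48 kHz s16le) can be exercised without any relays. A minimal self-contained sketch, assuming only bash and python3: synthesize one second of a 440 Hz tone, then apply the same gate.

```shell
# Standalone demo of the RMS gate: generate 1s of s16le test audio,
# then check RMS > 50 and > 5% nonzero samples, as analyze() does.
tmp=$(mktemp)
python3 - "$tmp" <<'PY'
import struct, math, sys
RATE = 48000
pcm = [int(12000 * math.sin(2 * math.pi * 440 * i / RATE)) for i in range(RATE)]
with open(sys.argv[1], 'wb') as f:
    f.write(struct.pack(f'<{len(pcm)}h', *pcm))
PY
verdict=$(python3 - "$tmp" <<'PY'
import struct, math, sys
data = open(sys.argv[1], 'rb').read()
s = struct.unpack(f'<{len(data)//2}h', data)
rms = math.sqrt(sum(x * x for x in s) / len(s))   # ~8500 for a 12000-amplitude sine
pct = 100 * sum(1 for x in s if x) / len(s)       # nearly all samples nonzero
print('PASS' if rms > 50 and pct > 5 else 'FAIL')
PY
)
rm -f "$tmp"
echo "$verdict"   # → PASS
```

A silent or near-silent recording (e.g. a relay that joined but forwarded nothing) fails the same gate, which is exactly the failure mode the harness is hunting.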

scripts/mint-tmux.sh (new executable file, +72 lines)
@@ -0,0 +1,72 @@
#!/usr/bin/env bash
# =============================================================================
# mint-tmux.sh — run a command inside a persistent tmux session on the
# Linux Mint build box so the user can attach and watch/interact at any time.
#
# Usage:
#   mint-tmux.sh run <window-name> <command...>   # start a new tmux window
#   mint-tmux.sh send <window-name> <text...>     # send keys to a window
#   mint-tmux.sh kill <window-name>               # close a window
#   mint-tmux.sh list                             # list windows
#   mint-tmux.sh tail <window-name>               # dump last 200 lines
#   mint-tmux.sh attach                           # attach interactively
#
# Session name is always "wzp". Attach manually with:
#   ssh -t root@172.16.81.192 tmux attach -t wzp
#
# If the wzp session doesn't exist yet, it's created automatically.
# =============================================================================
set -euo pipefail

HOST="root@172.16.81.192"
SESSION="wzp"
SSH_OPTS="-o ConnectTimeout=10 -o LogLevel=ERROR"

ensure_session() {
    ssh $SSH_OPTS "$HOST" "
        tmux has-session -t $SESSION 2>/dev/null || tmux new-session -d -s $SESSION -n home 'bash -l'
    "
}

cmd="${1:-list}"
shift || true

case "$cmd" in
    run)
        WIN="${1:?window name required}"; shift
        ensure_session
        # Use a heredoc so multi-arg commands don't need escaping
        CMD="$*"
        ssh $SSH_OPTS "$HOST" bash -s <<REMOTE
if tmux list-windows -t $SESSION -F '#W' 2>/dev/null | grep -qx '$WIN'; then
    tmux kill-window -t $SESSION:$WIN 2>/dev/null || true
fi
tmux new-window -t $SESSION -n '$WIN' "bash -l -c '$CMD; echo; echo --- window $WIN exited with code \\\$?; exec bash -l'"
REMOTE
        echo "Started '$WIN' in tmux session $SESSION on $HOST"
        echo "Attach: ssh -t $HOST tmux attach -t $SESSION"
        ;;
    send)
        WIN="${1:?window name required}"; shift
        TEXT="$*"
        ssh $SSH_OPTS "$HOST" "tmux send-keys -t $SESSION:$WIN '$TEXT' C-m"
        ;;
    kill)
        WIN="${1:?window name required}"
        ssh $SSH_OPTS "$HOST" "tmux kill-window -t $SESSION:$WIN 2>/dev/null || true"
        ;;
    list)
        ensure_session
        ssh $SSH_OPTS "$HOST" "tmux list-windows -t $SESSION"
        ;;
    tail)
        WIN="${1:?window name required}"
        ssh $SSH_OPTS "$HOST" "tmux capture-pane -p -t $SESSION:$WIN -S -200 || echo 'no such window'"
        ;;
    attach)
        exec ssh -t $SSH_OPTS "$HOST" tmux attach -t $SESSION
        ;;
    *)
        sed -n '3,20p' "$0"
        exit 1
        ;;
esac
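Every subcommand that takes a window name leans on bash's `${1:?message}` expansion to fail fast instead of passing a malformed target to tmux. A tiny sketch of how that guard behaves (the function name is hypothetical):

```shell
# `${1:?...}` aborts the expanding (sub)shell with an error when $1 is
# missing, so a forgotten window name never reaches the ssh/tmux call.
need_window() {
    local win="${1:?window name required}"
    echo "window=$win"
}
with_arg=$(need_window build)        # expands normally
if no_arg=$(need_window 2>&1); then  # the subshell dies; this branch is skipped
    guard="missed"
else
    guard="tripped"
fi
echo "$with_arg / guard=$guard"
```

Because the failure happens in a command substitution, only the subshell exits; wrapping the call in `if` keeps `set -e` in the outer script from tearing everything down while still surfacing the error text.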

scripts/prep-linux-mint.sh (new executable file, +167 lines)
@@ -0,0 +1,167 @@
#!/usr/bin/env bash
# =============================================================================
# Prepare a Linux Mint / Debian / Ubuntu x86_64 host as a full WarzonePhone
# Android build environment. Installs everything the docker wzp-android-builder
# image has, but directly on the host — so we can iterate locally without
# docker layer caching, see real linker output, run gdbserver, etc.
#
# Target host: root@172.16.81.192 (Linux Mint on the LAN)
#
# Usage (from the macOS workstation):
#   scp scripts/prep-linux-mint.sh root@172.16.81.192:/tmp/
#   ssh root@172.16.81.192 'nohup bash /tmp/prep-linux-mint.sh > /var/log/wzp-prep.log 2>&1 &'
#
# The script is idempotent: safe to re-run if a step fails. Each stage tests
# for its target before doing work. Progress + completion is pinged to
# ntfy.sh/wzp so we can track it from the phone.
#
# On success the host has:
#   - JDK 17
#   - Android SDK (cmdline-tools + platforms 34/36, build-tools 34/35, NDK 26.1)
#   - Node.js 20 LTS + npm
#   - Rust stable + aarch64/armv7/i686/x86_64 android targets
#   - cargo-ndk + cargo tauri-cli 2.x
#   - /opt/wzp/warzonePhone (cloned workspace checkout on feat/desktop-audio-rewrite)
#
# Everything lives under /opt/android-sdk and /opt/wzp so nothing leaks into $HOME.
# =============================================================================
set -euo pipefail

NTFY_TOPIC="https://ntfy.sh/wzp"
NDK_VERSION="26.1.10909125"
ANDROID_API=34
ANDROID_API_TAURI=36
BUILD_TOOLS_TAURI="35.0.0"
ANDROID_HOME=/opt/android-sdk
WZP_DIR=/opt/wzp
GIT_REPO="ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git"
GIT_BRANCH="feat/desktop-audio-rewrite"

export DEBIAN_FRONTEND=noninteractive
export ANDROID_HOME ANDROID_NDK_HOME="$ANDROID_HOME/ndk/$NDK_VERSION"
export NDK_HOME="$ANDROID_NDK_HOME"
export PATH="$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/platform-tools:/root/.cargo/bin:$PATH"

notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }
log()    { echo -e "\n\033[1;36m[prep-linux-mint]\033[0m $*"; }
die()    { notify "wzp prep-linux-mint FAILED: $1"; echo "FATAL: $1" >&2; exit 1; }

trap 'die "line $LINENO"' ERR

notify "wzp prep-linux-mint STARTED on $(hostname) ($(whoami))"

# ─── 1. Base packages ────────────────────────────────────────────────────────
log "Installing base packages..."
apt-get update -qq
apt-get install -y --no-install-recommends \
    build-essential \
    ca-certificates \
    cmake \
    curl \
    file \
    git \
    libasound2-dev \
    libc6-dev \
    libssl-dev \
    openjdk-17-jdk-headless \
    pkg-config \
    unzip \
    wget \
    xz-utils \
    zip

# ─── 2. Android SDK + NDK ────────────────────────────────────────────────────
if [ ! -x "$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager" ]; then
    log "Installing Android cmdline-tools..."
    mkdir -p "$ANDROID_HOME/cmdline-tools"
    cd /tmp
    wget -q https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip -O cmdtools.zip
    unzip -qo cmdtools.zip -d "$ANDROID_HOME/cmdline-tools"
    mv "$ANDROID_HOME/cmdline-tools/cmdline-tools" "$ANDROID_HOME/cmdline-tools/latest"
    rm cmdtools.zip
else
    log "cmdline-tools already installed"
fi

if [ ! -d "$ANDROID_HOME/ndk/$NDK_VERSION" ] || \
   [ ! -d "$ANDROID_HOME/platforms/android-$ANDROID_API" ] || \
   [ ! -d "$ANDROID_HOME/platforms/android-$ANDROID_API_TAURI" ]; then
    log "Installing Android platforms + NDK $NDK_VERSION..."
    yes | "$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager" --licenses > /dev/null 2>&1 || true
    "$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager" --install \
        "platforms;android-$ANDROID_API" \
        "build-tools;$ANDROID_API.0.0" \
        "platforms;android-$ANDROID_API_TAURI" \
        "build-tools;$BUILD_TOOLS_TAURI" \
        "ndk;$NDK_VERSION" \
        "platform-tools" 2>&1 | grep -v '^\[' || true
else
    log "Android SDK components already installed"
fi

# ─── 3. Node.js 20 LTS ───────────────────────────────────────────────────────
if ! command -v node >/dev/null 2>&1 || ! node --version | grep -q "^v20"; then
    log "Installing Node.js 20 LTS..."
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
    apt-get install -y --no-install-recommends nodejs
else
    log "Node.js already at $(node --version)"
fi

# ─── 4. Rust + Android targets ───────────────────────────────────────────────
if ! command -v rustup >/dev/null 2>&1; then
    log "Installing rustup..."
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
fi
. /root/.cargo/env

log "Ensuring Rust android targets + cargo-ndk + cargo-tauri..."
rustup target add \
    aarch64-linux-android \
    armv7-linux-androideabi \
    i686-linux-android \
    x86_64-linux-android
command -v cargo-ndk   >/dev/null 2>&1 || cargo install cargo-ndk
command -v cargo-tauri >/dev/null 2>&1 || cargo install tauri-cli --version "^2.0" --locked

# ─── 5. Clone the workspace ──────────────────────────────────────────────────
mkdir -p "$WZP_DIR"
cd "$WZP_DIR"
if [ -d warzonePhone/.git ]; then
    log "Pulling latest on $GIT_BRANCH..."
    cd warzonePhone
    git fetch origin || true
    git checkout "$GIT_BRANCH" 2>/dev/null || git checkout -b "$GIT_BRANCH" "origin/$GIT_BRANCH"
    git reset --hard "origin/$GIT_BRANCH" || true
else
    log "Cloning warzonePhone from $GIT_REPO..."
    # The repo URL needs SSH keys; if unavailable, skip and let the user sort it later
    if git clone --branch "$GIT_BRANCH" "$GIT_REPO" warzonePhone 2>/dev/null; then
        log "  cloned ok"
    else
        log "  clone failed (no SSH keys for $GIT_REPO — skipping, user will rsync)"
    fi
fi

# ─── 6. Persistent env for the user ──────────────────────────────────────────
cat > /etc/profile.d/wzp-android.sh <<ENVEOF
export ANDROID_HOME=$ANDROID_HOME
export ANDROID_NDK_HOME=$ANDROID_HOME/ndk/$NDK_VERSION
export NDK_HOME=\$ANDROID_NDK_HOME
export PATH=\$ANDROID_HOME/cmdline-tools/latest/bin:\$ANDROID_HOME/platform-tools:/root/.cargo/bin:\$PATH
ENVEOF
chmod 644 /etc/profile.d/wzp-android.sh

# ─── 7. Sanity summary ───────────────────────────────────────────────────────
log "Sanity checks:"
echo "  java:        $(java -version 2>&1 | head -1)"
echo "  node:        $(node --version)"
echo "  npm:         $(npm --version)"
echo "  rustc:       $(rustc --version)"
echo "  cargo-ndk:   $(cargo ndk --version 2>&1 | head -1)"
echo "  cargo-tauri: $(cargo tauri --version 2>&1 | head -1)"
echo "  NDK dir:     $ANDROID_NDK_HOME"
echo "  WZP dir:     $WZP_DIR/warzonePhone"

notify "wzp prep-linux-mint DONE on $(hostname) — ready at /opt/wzp/warzonePhone"
log "All done."
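The prep script's idempotency rests on one pattern: every stage probes for its end state before doing any work, so a re-run after a mid-way failure only redoes the missing stages. A toy version (marker files stand in for real installs; all names here are hypothetical):

```shell
# Each "stage" is guarded by a probe for its end state, mirroring the
# `[ ! -x ... ]` / `command -v ... ||` checks in the real script.
workdir=$(mktemp -d)
installs=0
install_stage() {  # $1 = marker file that proves the stage already ran
    if [ -e "$workdir/$1" ]; then
        echo "stage $1: already done, skipping"
    else
        touch "$workdir/$1"
        installs=$((installs + 1))
    fi
}
install_stage jdk17
install_stage node20
install_stage jdk17   # re-run after a "failure": no duplicate work
echo "stages actually executed: $installs"   # → 2
rm -rf "$workdir"
```

The guard condition must test the *end state* (an installed binary, an unpacked directory), not merely "did we start" — otherwise a half-finished stage would be skipped on the retry.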

skills-lock.json (new file, +10 lines)
@@ -0,0 +1,10 @@
{
  "version": 1,
  "skills": {
    "caveman": {
      "source": "JuliusBrussee/caveman",
      "sourceType": "github",
      "computedHash": "aa7939fc4d1fe31484090290da77f2d21e026aa4b34b329d00e6630feb985d75"
    }
  }
}