Compare commits: debug/code...0a05e62c7f (41 commits)
SHA1:
0a05e62c7f
b97f32ce46
d66d583583
d06cf66538
c8bcc5c974
760126b6ab
53f8bf8fff
b3cdad0c75
fa3c7f1cef
68b56d9172
7973c8c6a3
3e9539e5da
a1ccb3f390
7751439e2b
20bc290c18
a8dc350a65
00fa109f07
1e40dec468
aecef0905d
18f7faa279
eeb85aeac2
00b405aa87
d09e21965e
97bcc79f9b
264ef9c4d4
a9adb5cfd7
a39b074d6e
9cab6e2347
5e93cb74f2
b56b4a759c
6f99841cc7
3b0811ce2e
9eed94850d
5e9718aeb2
3093933602
4c6c909732
33fab9a049
31d2306915
4af7c5f94c
6597b5bd86
ae9d8526dd
.agents/skills/caveman/SKILL.md (new file, 72 lines)
@@ -0,0 +1,72 @@
---
name: caveman
description: >
  Ultra-compressed communication mode. Slash token usage ~75% by speaking like caveman
  while keeping full technical accuracy. Use when user says "caveman mode", "talk like caveman",
  "use caveman", "less tokens", "be brief", or invokes /caveman. Also auto-triggers
  when token efficiency is requested.
---

# Caveman Mode

## Core Rule

Respond like smart caveman. Cut articles, filler, pleasantries. Keep all technical substance.

## Grammar

- Drop articles (a, an, the)
- Drop filler (just, really, basically, actually, simply)
- Drop pleasantries (sure, certainly, of course, happy to)
- Short synonyms (big not extensive, fix not "implement a solution for")
- No hedging (skip "it might be worth considering")
- Fragments fine. No need full sentence
- Technical terms stay exact. "Polymorphism" stays "polymorphism"
- Code blocks unchanged. Caveman speak around code, not in code
- Error messages quoted exact. Caveman only for explanation

## Pattern

```
[thing] [action] [reason]. [next step].
```

Not:

> Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by...

Yes:

> Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:

## Examples

**User:** Why is my React component re-rendering?

**Normal (69 tokens):** "The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

**Caveman (19 tokens):** "New object ref each render. Inline object prop = new ref = re-render. Wrap in `useMemo`."

---

**User:** How do I set up a PostgreSQL connection pool?

**Caveman:**

Use `pg` pool:

```js
const pool = new Pool({
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
})
```

max = concurrent connections. Keep under DB limit. idleTimeout kill stale conn.

## Boundaries

- Code: write normal. Caveman English only
- Git commits: normal
- PR descriptions: normal
- User say "stop caveman" or "normal mode": revert immediately
@@ -7,6 +7,8 @@ on:
      - 'feat/*'
    tags:
      - 'v*'
    paths-ignore:
      - '.gitea/**'
  workflow_dispatch:

env:
.gitea/workflows/mirror-github.yml (new file, 43 lines)
@@ -0,0 +1,43 @@
name: Mirror to GitHub

on:
  push:
    branches:
      - main
      - 'feat/*'
      - 'feature/*'
    tags:
      - '*'

jobs:
  mirror:
    runs-on: ubuntu-latest
    container:
      image: catthehacker/ubuntu:act-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Push to GitHub
        env:
          GH_SSH_KEY: ${{ secrets.GH_SSH_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "${GH_SSH_KEY}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan github.com >> ~/.ssh/known_hosts 2>/dev/null

          git remote add github git@github.com:manawenuz/wzp.git

          # Push the current branch
          BRANCH="${GITHUB_REF#refs/heads/}"
          TAG="${GITHUB_REF#refs/tags/}"

          if [ "${GITHUB_REF}" != "${GITHUB_REF#refs/tags/}" ]; then
            echo "Pushing tag: ${TAG}"
            git push github "refs/tags/${TAG}" --force
          else
            echo "Pushing branch: ${BRANCH}"
            git push github "HEAD:refs/heads/${BRANCH}" --force
          fi
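The run step in the mirror workflow distinguishes tags from branches purely by prefix-stripping `GITHUB_REF`: the shell test `[ "${GITHUB_REF}" != "${GITHUB_REF#refs/tags/}" ]` is true exactly when stripping `refs/tags/` changed the string, i.e. when the ref is a tag. A minimal Java sketch of the same decision logic (`RefClassifier` is a hypothetical name, not part of this repo):

```java
// Mirrors the workflow's GITHUB_REF handling:
// refs/tags/X  -> push the tag X, refs/heads/X -> push the branch X.
final class RefClassifier {
    static String classify(String githubRef) {
        String tagPrefix = "refs/tags/";
        String headPrefix = "refs/heads/";
        if (githubRef.startsWith(tagPrefix)) {
            return "tag:" + githubRef.substring(tagPrefix.length());
        }
        if (githubRef.startsWith(headPrefix)) {
            // Branch names may themselves contain '/', e.g. feat/audio,
            // which is why only the fixed prefix is stripped.
            return "branch:" + githubRef.substring(headPrefix.length());
        }
        return "unknown:" + githubRef;
    }
}
```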
Cargo.lock (generated, 241 lines changed)
@@ -297,6 +297,12 @@ dependencies = [
 "tower-service",
]

[[package]]
name = "base16ct"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c7f02d4ea65f2c1853089ffd8d2787bdbc63de2f0d29dedbcf8ccdfa0ccd4cf"

[[package]]
name = "base64"
version = "0.22.1"

@@ -467,6 +473,7 @@ dependencies = [
 "iana-time-zone",
 "js-sys",
 "num-traits",
 "serde",
 "wasm-bindgen",
 "windows-link",
]

@@ -627,6 +634,24 @@ dependencies = [
 "libc",
]

[[package]]
name = "crunchy"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "460fbee9c2c2f33933d720630a6a0bac33ba7053db5344fac858d4b8952d77d5"

[[package]]
name = "crypto-bigint"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0dc92fb57ca44df6db8059111ab3af99a63d5d0f8375d9972e319a379c6bab76"
dependencies = [
 "generic-array",
 "rand_core 0.6.4",
 "subtle",
 "zeroize",
]

[[package]]
name = "crypto-common"
version = "0.1.7"

@@ -650,6 +675,7 @@ dependencies = [
 "digest",
 "fiat-crypto",
 "rustc_version",
 "serde",
 "subtle",
 "zeroize",
]

@@ -816,10 +842,32 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
dependencies = [
 "block-buffer",
 "const-oid",
 "crypto-common",
 "subtle",
]

[[package]]
name = "dirs"
version = "6.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3e8aa94d75141228480295a7d0e7feb620b1a5ad9f12bc40be62411e38cce4e"
dependencies = [
 "dirs-sys",
]

[[package]]
name = "dirs-sys"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e01a3366d27ee9890022452ee61b2b63a67e6f13f58900b651ff5665f0bb1fab"
dependencies = [
 "libc",
 "option-ext",
 "redox_users",
 "windows-sys 0.61.2",
]

[[package]]
name = "displaydoc"
version = "0.2.5"

@@ -850,6 +898,21 @@ dependencies = [
 "rustfft",
]

[[package]]
name = "ecdsa"
version = "0.16.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ee27f32b5c5292967d2d4a9d7f1e0b0aed2c15daded5a60300e4abb9d8020bca"
dependencies = [
 "der",
 "digest",
 "elliptic-curve",
 "rfc6979",
 "serdect",
 "signature",
 "spki",
]

[[package]]
name = "ed25519"
version = "2.2.3"

@@ -857,6 +920,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "115531babc129696a58c64a4fef0a8bf9e9698629fb97e9e40767d235cfbcd53"
dependencies = [
 "pkcs8",
 "serde",
 "signature",
]
@@ -881,6 +945,26 @@ version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"

[[package]]
name = "elliptic-curve"
version = "0.13.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5e6043086bf7973472e0c7dff2142ea0b680d30e18d9cc40f267efbf222bd47"
dependencies = [
 "base16ct",
 "crypto-bigint",
 "digest",
 "ff",
 "generic-array",
 "group",
 "pkcs8",
 "rand_core 0.6.4",
 "sec1",
 "serdect",
 "subtle",
 "zeroize",
]

[[package]]
name = "encoding_rs"
version = "0.8.35"

@@ -924,6 +1008,16 @@ version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"

[[package]]
name = "ff"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0b50bfb653653f9ca9095b427bed08ab8d75a137839d9ad64eb11810d5b6393"
dependencies = [
 "rand_core 0.6.4",
 "subtle",
]

[[package]]
name = "fiat-crypto"
version = "0.2.9"

@@ -1084,6 +1178,7 @@ checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
dependencies = [
 "typenum",
 "version_check",
 "zeroize",
]

[[package]]

@@ -1143,6 +1238,17 @@ version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0cc23270f6e1808e30a928bdc84dea0b9b4136a8bc82338574f23baf47bbd280"

[[package]]
name = "group"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0f9ef7462f7c099f518d754361858f86d8a07af53ba9af0fe635bbccb151a63"
dependencies = [
 "ff",
 "rand_core 0.6.4",
 "subtle",
]

[[package]]
name = "h2"
version = "0.4.13"

@@ -1626,6 +1732,21 @@ dependencies = [
 "wasm-bindgen",
]

[[package]]
name = "k256"
version = "0.13.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6e3919bbaa2945715f0bb6d3934a173d1e9a59ac23767fbaaef277265a7411b"
dependencies = [
 "cfg-if",
 "ecdsa",
 "elliptic-curve",
 "once_cell",
 "serdect",
 "sha2",
 "signature",
]

[[package]]
name = "lazy_static"
version = "1.5.0"

@@ -1660,6 +1781,15 @@ version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6d2cec3eae94f9f509c767b45932f1ada8350c4bdb85af2fcab4a3c14807981"

[[package]]
name = "libredox"
version = "0.1.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ddbf48fd451246b1f8c2610bd3b4ac0cc6e149d89832867093ab69a17194f08"
dependencies = [
 "libc",
]

[[package]]
name = "linux-raw-sys"
version = "0.12.1"

@@ -1702,6 +1832,15 @@ dependencies = [
 "libc",
]

[[package]]
name = "matchers"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9"
dependencies = [
 "regex-automata",
]

[[package]]
name = "matchit"
version = "0.7.3"

@@ -1980,6 +2119,12 @@ dependencies = [
 "vcpkg",
]

[[package]]
name = "option-ext"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d"

[[package]]
name = "os_str_bytes"
version = "6.6.1"
@@ -2320,6 +2465,17 @@ dependencies = [
 "bitflags 2.11.0",
]

[[package]]
name = "redox_users"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4e608c6638b9c18977b00b475ac1f28d14e84b27d8d42f70e0bf1e3dec127ac"
dependencies = [
 "getrandom 0.2.17",
 "libredox",
 "thiserror 2.0.18",
]

[[package]]
name = "regex"
version = "1.12.3"

@@ -2389,6 +2545,16 @@ dependencies = [
 "web-sys",
]

[[package]]
name = "rfc6979"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8dd2a808d456c4a54e300a23e9f5a67e122c3024119acbfd73e3bf664491cb2"
dependencies = [
 "hmac",
 "subtle",
]

[[package]]
name = "ring"
version = "0.17.14"

@@ -2567,6 +2733,21 @@ version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"

[[package]]
name = "sec1"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3e97a565f76233a6003f9f5c54be1d9c5bdfa3eccfb189469f11ec4901c47dc"
dependencies = [
 "base16ct",
 "der",
 "generic-array",
 "pkcs8",
 "serdect",
 "subtle",
 "zeroize",
]

[[package]]
name = "security-framework"
version = "3.7.0"

@@ -2671,6 +2852,16 @@ dependencies = [
 "serde",
]

[[package]]
name = "serdect"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a84f14a19e9a014bb9f4512488d9829a68e04ecabffb0f9904cd1ace94598177"
dependencies = [
 "base16ct",
 "serde",
]

[[package]]
name = "sha1"
version = "0.10.6"

@@ -2724,6 +2915,7 @@ version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77549399552de45a898a580c1b41d445bf730df867cc44e6c0233bbc4b8329de"
dependencies = [
 "digest",
 "rand_core 0.6.4",
]

@@ -2937,6 +3129,15 @@ version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7694e1cfe791f8d31026952abf09c69ca6f6fa4e1a1229e18988f06a04a12dca"

[[package]]
name = "tiny-keccak"
version = "2.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c9d3793400a45f954c52e73d068316d76b6f4e36977e3fcebb13a2721e80237"
dependencies = [
 "crunchy",
]

[[package]]
name = "tinystr"
version = "0.8.2"

@@ -3235,10 +3436,14 @@ version = "0.3.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cb7f578e5945fb242538965c2d0b04418d38ec25c79d160cd279bf0731c8d319"
dependencies = [
 "matchers",
 "nu-ansi-term",
 "once_cell",
 "regex-automata",
 "sharded-slab",
 "smallvec",
 "thread_local",
 "tracing",
 "tracing-core",
 "tracing-log",
]

@@ -3367,6 +3572,18 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"

[[package]]
name = "uuid"
version = "1.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5ac8b6f42ead25368cf5b098aeb3dc8a1a2c05a3eee8a9a1a68c640edbfc79d9"
dependencies = [
 "getrandom 0.4.2",
 "js-sys",
 "serde_core",
 "wasm-bindgen",
]

[[package]]
name = "valuable"
version = "0.1.1"

@@ -3406,7 +3623,28 @@ dependencies = [

[[package]]
name = "warzone-protocol"
-version = "0.1.0"
+version = "0.0.38"
dependencies = [
 "base64",
 "bincode",
 "bip39",
 "chacha20poly1305",
 "chrono",
 "curve25519-dalek",
 "ed25519-dalek",
 "hex",
 "hkdf",
 "k256",
 "rand 0.8.5",
 "serde",
 "serde_json",
 "sha2",
 "thiserror 2.0.18",
 "tiny-keccak",
 "uuid",
 "x25519-dalek",
 "zeroize",
]

[[package]]
name = "wasi"

@@ -4132,6 +4370,7 @@ dependencies = [
 "async-trait",
 "axum 0.7.9",
 "bytes",
 "dirs",
 "futures-util",
 "prometheus",
 "quinn",
@@ -19,6 +19,8 @@ import java.io.FileOutputStream
import java.io.OutputStreamWriter
import java.nio.ByteBuffer
import java.nio.ByteOrder
+import java.util.concurrent.CountDownLatch
+import java.util.concurrent.TimeUnit
import kotlin.math.pow
import kotlin.math.sqrt

@@ -55,10 +57,23 @@ class AudioPipeline(private val context: Context) {
    /** Whether to attach hardware AEC. Must be set before start(). */
    var aecEnabled: Boolean = true
    /** Enable debug recording of PCM + RMS histogram to cache dir. */
-   var debugRecording: Boolean = true
+   var debugRecording: Boolean = false
    private var captureThread: Thread? = null
    private var playoutThread: Thread? = null

    // DirectByteBuffers for zero-copy JNI audio transfer.
    // Allocated as class fields (NOT locals) because ART's JIT OSR
    // can null local variables when it replaces the stack frame mid-loop.
    // These survive OSR because they're on the heap.
    private val captureDirectBuf: ByteBuffer =
        ByteBuffer.allocateDirect(FRAME_SAMPLES * 2).order(ByteOrder.LITTLE_ENDIAN)
    private val playoutDirectBuf: ByteBuffer =
        ByteBuffer.allocateDirect(FRAME_SAMPLES * 2).order(ByteOrder.LITTLE_ENDIAN)

    /** Latch counted down by each audio thread after exiting its loop.
     * stop() does NOT wait on this — teardown waits via awaitDrain(). */
    private var drainLatch: CountDownLatch? = null

    private val debugDir: File by lazy {
        File(context.cacheDir, "wzp_debug").also { it.mkdirs() }
    }

@@ -66,9 +81,11 @@ class AudioPipeline(private val context: Context) {
    fun start(engine: WzpEngine) {
        if (running) return
        running = true
        drainLatch = CountDownLatch(2) // one for capture, one for playout

        captureThread = Thread({
            runCapture(engine)
            drainLatch?.countDown() // signal: capture loop exited, no more JNI calls
            // Park thread forever — exiting triggers a libcrypto TLS destructor
            // crash (SIGSEGV in OPENSSL_free) on Android when a JNI-calling thread exits.
            parkThread()

@@ -80,6 +97,7 @@
        playoutThread = Thread({
            runPlayout(engine)
            drainLatch?.countDown() // signal: playout loop exited
            parkThread()
        }, "wzp-playout").apply {
            isDaemon = true

@@ -92,10 +110,20 @@ class AudioPipeline(private val context: Context) {
    fun stop() {
        running = false
-       // Don't join — threads are parked as daemons to avoid native TLS crash
+       // Don't join threads — they are parked as daemons to avoid native TLS crash.
+       // Don't null thread refs or drainLatch — teardown() needs awaitDrain().
        Log.i(TAG, "audio pipeline stopped (running=false)")
    }

    /** Block until both audio threads have exited their loops (max 200ms).
     * After this returns, no more JNI calls to the engine will be made. */
    fun awaitDrain(): Boolean {
        val ok = drainLatch?.await(200, TimeUnit.MILLISECONDS) ?: true
        if (!ok) Log.w(TAG, "awaitDrain: audio threads did not drain in 200ms")
        captureThread = null
        playoutThread = null
-       Log.i(TAG, "audio pipeline stopped")
        drainLatch = null
        return ok
    }

    private fun applyGain(pcm: ShortArray, count: Int, db: Float) {

@@ -206,7 +234,10 @@ class AudioPipeline(private val context: Context) {
        val read = recorder.read(pcm, 0, FRAME_SAMPLES)
        if (read > 0) {
            applyGain(pcm, read, captureGainDb)
-           engine.writeAudio(pcm)
+           // Zero-copy write via DirectByteBuffer (class field, survives JIT OSR)
+           captureDirectBuf.clear()
+           captureDirectBuf.asShortBuffer().put(pcm, 0, read)
+           engine.writeAudioDirect(captureDirectBuf, read)

            // Debug: write raw PCM + RMS
            if (pcmOut != null) {

@@ -285,8 +316,12 @@ class AudioPipeline(private val context: Context) {
        }
        try {
            while (running) {
-               val read = engine.readAudio(pcm)
+               // Zero-copy read via DirectByteBuffer (class field, survives JIT OSR)
+               playoutDirectBuf.clear()
+               val read = engine.readAudioDirect(playoutDirectBuf, FRAME_SAMPLES)
                if (read >= FRAME_SAMPLES) {
+                   playoutDirectBuf.rewind()
+                   playoutDirectBuf.asShortBuffer().get(pcm, 0, read)
                    applyGain(pcm, read, playoutGainDb)
                    track.write(pcm, 0, read)
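The stop()/awaitDrain() split in the AudioPipeline diff (stop() only flips the running flag; teardown later waits on a latch that each audio thread counts down after its loop exits) can be sketched in plain Java. `DrainDemo` and its names are illustrative, not part of the app:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the drain pattern: each worker counts the latch down after its
// loop exits; teardown waits (bounded) before it is safe to free the engine.
final class DrainDemo {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final CountDownLatch drainLatch = new CountDownLatch(2);

    void start() {
        for (String name : new String[] {"capture", "playout"}) {
            Thread t = new Thread(() -> {
                while (running.get()) {
                    Thread.onSpinWait(); // per-frame work would go here
                }
                drainLatch.countDown(); // loop exited: no more engine calls
            }, name);
            t.setDaemon(true); // daemon, so a parked thread cannot block exit
            t.start();
        }
    }

    void stop() { running.set(false); } // flag only; no join here

    boolean awaitDrain() {
        try {
            return drainLatch.await(200, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

The bounded await mirrors the 200 ms budget in the Kotlin awaitDrain(): teardown proceeds either way, but only a `true` result guarantees no further JNI calls.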
@@ -28,6 +28,9 @@ class SettingsRepository(context: Context) {
        private const val KEY_PREFER_IPV6 = "prefer_ipv6"
        private const val KEY_IDENTITY_SEED = "identity_seed_hex"
        private const val KEY_AEC_ENABLED = "aec_enabled"
+       private const val KEY_DEBUG_RECORDING = "debug_recording"
+       private const val KEY_RECENT_ROOMS = "recent_rooms"
+       private const val TOFU_PREFIX = "tofu_"
    }

    // --- Servers ---

@@ -118,6 +121,16 @@ class SettingsRepository(context: Context) {
    fun saveAecEnabled(enabled: Boolean) { prefs.edit().putBoolean(KEY_AEC_ENABLED, enabled).apply() }
    fun loadAecEnabled(): Boolean = prefs.getBoolean(KEY_AEC_ENABLED, true)

    // --- Debug recording ---

    fun saveDebugRecording(enabled: Boolean) { prefs.edit().putBoolean(KEY_DEBUG_RECORDING, enabled).apply() }
    fun loadDebugRecording(): Boolean = prefs.getBoolean(KEY_DEBUG_RECORDING, false)

    // --- Codec choice ---
    // 0 = Opus (GOOD), 1 = Opus Low (DEGRADED), 2 = Codec2 (CATASTROPHIC)
    fun saveCodecChoice(choice: Int) { prefs.edit().putInt("codec_choice", choice).apply() }
    fun loadCodecChoice(): Int = prefs.getInt("codec_choice", 0)

    // --- Identity seed ---

    /**

@@ -138,4 +151,53 @@ class SettingsRepository(context: Context) {
    fun saveSeedHex(hex: String) {
        prefs.edit().putString(KEY_IDENTITY_SEED, hex).apply()
    }

    // --- Recent rooms ---

    data class RecentRoom(val relay: String, val room: String)

    fun addRecentRoom(relay: String, room: String) {
        val rooms = loadRecentRooms().toMutableList()
        rooms.removeAll { it.relay == relay && it.room == room }
        rooms.add(0, RecentRoom(relay, room))
        if (rooms.size > 5) rooms.subList(5, rooms.size).clear()
        val arr = JSONArray()
        rooms.forEach { arr.put(JSONObject().apply { put("relay", it.relay); put("room", it.room) }) }
        prefs.edit().putString(KEY_RECENT_ROOMS, arr.toString()).apply()
    }

    fun loadRecentRooms(): List<RecentRoom> {
        val json = prefs.getString(KEY_RECENT_ROOMS, null) ?: return emptyList()
        return try {
            val arr = JSONArray(json)
            (0 until arr.length()).map { i ->
                val o = arr.getJSONObject(i)
                RecentRoom(o.getString("relay"), o.getString("room"))
            }
        } catch (_: Exception) { emptyList() }
    }

    fun clearRecentRooms() {
        prefs.edit().remove(KEY_RECENT_ROOMS).apply()
    }

    // --- Server fingerprint TOFU ---

    fun saveServerFingerprint(address: String, fingerprint: String) {
        prefs.edit().putString("$TOFU_PREFIX$address", fingerprint).apply()
    }

    fun loadServerFingerprint(address: String): String? {
        return prefs.getString("$TOFU_PREFIX$address", null)
    }

    // --- Ping RTT cache ---

    fun savePingRtt(address: String, rttMs: Int) {
        prefs.edit().putInt("ping_rtt_$address", rttMs).apply()
    }

    fun loadPingRtt(address: String): Int {
        return prefs.getInt("ping_rtt_$address", -1)
    }
}
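addRecentRoom in the SettingsRepository diff implements a small MRU list: de-duplicate, insert at the front, cap at five entries. A plain-Java sketch of the same behavior, minus the SharedPreferences/JSON persistence (`RecentRooms` is a hypothetical name):

```java
import java.util.ArrayList;
import java.util.List;

// MRU list of (relay, room) pairs, capped at 5, most recent first.
final class RecentRooms {
    record Room(String relay, String room) {}

    private final List<Room> rooms = new ArrayList<>();

    void add(String relay, String room) {
        // Re-adding an existing pair moves it to the front instead of duplicating.
        rooms.removeIf(r -> r.relay().equals(relay) && r.room().equals(room));
        rooms.add(0, new Room(relay, room));
        // Same cap as the Kotlin code: trim everything past index 4.
        if (rooms.size() > 5) rooms.subList(5, rooms.size()).clear();
    }

    List<Room> list() { return List.copyOf(rooms); }
}
```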
@@ -38,9 +38,12 @@ class WzpEngine(private val callback: WzpCallback) {
     * @param alias display name sent to relay for room participant list
     * @return 0 on success, negative error code on failure
     */
-   fun startCall(relayAddr: String, room: String, seedHex: String = "", token: String = "", alias: String = ""): Int {
+   /**
+    * @param profile 0 = Opus GOOD, 1 = Opus DEGRADED, 2 = Codec2 CATASTROPHIC
+    */
+   fun startCall(relayAddr: String, room: String, seedHex: String = "", token: String = "", alias: String = "", profile: Int = 0): Int {
        check(nativeHandle != 0L) { "Engine not initialized" }
-       val result = nativeStartCall(nativeHandle, relayAddr, room, seedHex, token, alias)
+       val result = nativeStartCall(nativeHandle, relayAddr, room, seedHex, token, alias, profile)
        if (result == 0) {
            callback.onCallStateChanged(CallStateConstants.CONNECTING)
        } else {

@@ -117,11 +120,31 @@ class WzpEngine(private val callback: WzpCallback) {
        return nativeReadAudio(nativeHandle, pcm)
    }

    /**
     * Write captured PCM from a DirectByteBuffer — zero JNI array copy.
     * The buffer must be a direct ByteBuffer with native byte order containing i16 samples.
     * Called from the AudioRecord capture thread.
     */
    fun writeAudioDirect(buffer: java.nio.ByteBuffer, sampleCount: Int): Int {
        if (nativeHandle == 0L) return 0
        return nativeWriteAudioDirect(nativeHandle, buffer, sampleCount)
    }

    /**
     * Read decoded PCM into a DirectByteBuffer — zero JNI array copy.
     * The buffer must be a direct ByteBuffer with native byte order.
     * Called from the AudioTrack playout thread.
     */
    fun readAudioDirect(buffer: java.nio.ByteBuffer, maxSamples: Int): Int {
        if (nativeHandle == 0L) return 0
        return nativeReadAudioDirect(nativeHandle, buffer, maxSamples)
    }

    // -- JNI native methods --------------------------------------------------

    private external fun nativeInit(): Long
    private external fun nativeStartCall(
-       handle: Long, relay: String, room: String, seed: String, token: String, alias: String
+       handle: Long, relay: String, room: String, seed: String, token: String, alias: String, profile: Int
    ): Int
    private external fun nativeStopCall(handle: Long)
    private external fun nativeSetMute(handle: Long, muted: Boolean)

@@ -130,7 +153,19 @@ class WzpEngine(private val callback: WzpCallback) {
    private external fun nativeForceProfile(handle: Long, profile: Int)
    private external fun nativeWriteAudio(handle: Long, pcm: ShortArray): Int
    private external fun nativeReadAudio(handle: Long, pcm: ShortArray): Int
+   private external fun nativeWriteAudioDirect(handle: Long, buffer: java.nio.ByteBuffer, sampleCount: Int): Int
+   private external fun nativeReadAudioDirect(handle: Long, buffer: java.nio.ByteBuffer, maxSamples: Int): Int
    private external fun nativeDestroy(handle: Long)
    private external fun nativePingRelay(handle: Long, relay: String): String?

    /**
     * Ping a relay server. Requires engine to be initialized.
     * Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or null.
     */
    fun pingRelay(address: String): String? {
        if (nativeHandle == 0L) return null
        return nativePingRelay(nativeHandle, address)
    }

    companion object {
        init {
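The writeAudioDirect/readAudioDirect contract (a direct ByteBuffer in a fixed byte order carrying i16 samples) can be exercised without the native engine. This Java sketch round-trips PCM through a direct buffer using the same clear/put and rewind/get framing as the Kotlin call sites; `DirectPcmDemo` and the frame size are illustrative assumptions:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Round-trip i16 PCM through a direct buffer, mirroring the framing on the
// Kotlin side of writeAudioDirect/readAudioDirect (engine itself omitted).
final class DirectPcmDemo {
    static final int FRAME_SAMPLES = 960; // assumed frame size, 2 bytes/sample

    static short[] roundTrip(short[] pcm) {
        ByteBuffer buf = ByteBuffer.allocateDirect(FRAME_SAMPLES * 2)
                .order(ByteOrder.LITTLE_ENDIAN); // matches the Kotlin fields
        buf.clear();
        buf.asShortBuffer().put(pcm, 0, pcm.length); // capture side: put
        buf.rewind();
        short[] out = new short[pcm.length];
        buf.asShortBuffer().get(out, 0, out.length); // playout side: get
        return out;
    }
}
```

Because the buffer is allocated once and reused, the per-frame cost is only the put/get copy, which is the point of the zero-copy JNI path in the diff.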
android/app/src/main/java/com/wzp/net/RelayPinger.kt (new file, 12 lines)
@@ -0,0 +1,12 @@
package com.wzp.net

// Relay pinging is now done via WzpEngine.pingRelay() (instance method).
// This file kept for the data class only.

object RelayPinger {
    data class PingResult(
        val rttMs: Int,
        val reachable: Boolean,
        val serverFingerprint: String = "",
    )
}
@@ -12,6 +12,7 @@ import com.wzp.engine.CallStats
|
||||
import com.wzp.service.CallService
|
||||
import com.wzp.engine.WzpCallback
|
||||
import com.wzp.engine.WzpEngine
|
||||
import kotlinx.coroutines.Dispatchers
|
||||
import kotlinx.coroutines.Job
|
||||
import kotlinx.coroutines.delay
|
||||
import kotlinx.coroutines.flow.MutableStateFlow
|
||||
@@ -19,6 +20,8 @@ import kotlinx.coroutines.flow.StateFlow
|
||||
import kotlinx.coroutines.flow.asStateFlow
|
||||
import kotlinx.coroutines.isActive
|
||||
import kotlinx.coroutines.launch
|
||||
import kotlinx.coroutines.withContext
|
||||
import org.json.JSONObject
|
||||
import java.io.File
|
||||
import java.net.Inet4Address
|
||||
import java.net.Inet6Address
@@ -26,6 +29,14 @@ import java.net.InetAddress

data class ServerEntry(val address: String, val label: String)

data class PingResult(
    val rttMs: Int,
    val serverFingerprint: String = "",
    val reachable: Boolean = rttMs > 0,
)

enum class LockStatus { UNKNOWN, OFFLINE, NEW, VERIFIED, CHANGED }

class CallViewModel : ViewModel(), WzpCallback {

    private var engine: WzpEngine? = null
@@ -70,6 +81,16 @@ class CallViewModel : ViewModel(), WzpCallback {
    private val _preferIPv6 = MutableStateFlow(false)
    val preferIPv6: StateFlow<Boolean> = _preferIPv6.asStateFlow()

    private val _recentRooms = MutableStateFlow<List<com.wzp.data.SettingsRepository.RecentRoom>>(emptyList())
    val recentRooms: StateFlow<List<com.wzp.data.SettingsRepository.RecentRoom>> = _recentRooms.asStateFlow()

    /** Ping results keyed by server address. */
    private val _pingResults = MutableStateFlow<Map<String, PingResult>>(emptyMap())
    val pingResults: StateFlow<Map<String, PingResult>> = _pingResults.asStateFlow()

    /** Known server fingerprints (TOFU). */
    private val _knownFingerprints = MutableStateFlow<Map<String, String>>(emptyMap())

    private val _playoutGainDb = MutableStateFlow(0f)
    val playoutGainDb: StateFlow<Float> = _playoutGainDb.asStateFlow()

@@ -85,6 +106,18 @@ class CallViewModel : ViewModel(), WzpCallback {
    private val _aecEnabled = MutableStateFlow(true)
    val aecEnabled: StateFlow<Boolean> = _aecEnabled.asStateFlow()

    private val _debugRecording = MutableStateFlow(false)
    val debugRecording: StateFlow<Boolean> = _debugRecording.asStateFlow()

    // Quality profile index (matches JNI bridge profile_from_int)
    private val _codecChoice = MutableStateFlow(0)
    val codecChoice: StateFlow<Int> = _codecChoice.asStateFlow()

    /** Key-change warning dialog state. */
    data class KeyWarningInfo(val address: String, val oldFp: String, val newFp: String)
    private val _keyWarning = MutableStateFlow<KeyWarningInfo?>(null)
    val keyWarning: StateFlow<KeyWarningInfo?> = _keyWarning.asStateFlow()

    /** True when a call just ended and debug report can be sent. */
    private val _debugReportAvailable = MutableStateFlow(false)
    val debugReportAvailable: StateFlow<Boolean> = _debugReportAvailable.asStateFlow()
@@ -139,6 +172,9 @@ class CallViewModel : ViewModel(), WzpCallback {
        _captureGainDb.value = s.loadCaptureGain()
        _seedHex.value = s.getOrCreateSeedHex()
        _aecEnabled.value = s.loadAecEnabled()
        _debugRecording.value = s.loadDebugRecording()
        _codecChoice.value = s.loadCodecChoice()
        _recentRooms.value = s.loadRecentRooms()
    }

    fun selectServer(index: Int) {
@@ -182,6 +218,70 @@ class CallViewModel : ViewModel(), WzpCallback {
        settings?.saveSelectedServer(_selectedServer.value)
    }

    /**
     * Ping all servers via native QUIC. Requires engine to be initialized.
     * Creates engine if needed, pings, keeps engine alive for subsequent Connect.
     */
    fun pingAllServers() {
        viewModelScope.launch {
            // Ensure engine exists
            if (engine == null || engine?.isInitialized != true) {
                try {
                    engine = WzpEngine(this@CallViewModel).also { it.init() }
                    engineInitialized = true
                } catch (e: Exception) {
                    Log.w(TAG, "engine init for ping failed: $e")
                    return@launch
                }
            }
            val eng = engine ?: return@launch

            val results = mutableMapOf<String, PingResult>()
            val known = mutableMapOf<String, String>()
            _servers.value.forEach { server ->
                val json = withContext(Dispatchers.IO) {
                    eng.pingRelay(server.address)
                }
                if (json != null) {
                    try {
                        val obj = JSONObject(json)
                        val rtt = obj.getInt("rtt_ms")
                        val fp = obj.optString("server_fingerprint", "")
                        results[server.address] = PingResult(rttMs = rtt, serverFingerprint = fp)
                        // TOFU
                        if (fp.isNotEmpty()) {
                            val saved = settings?.loadServerFingerprint(server.address)
                            if (saved == null) settings?.saveServerFingerprint(server.address, fp)
                            known[server.address] = saved ?: fp
                        }
                    } catch (_: Exception) {}
                }
            }
            _pingResults.value = results
            _knownFingerprints.value = known
        }
    }

    /** Load saved TOFU fingerprints. */
    fun loadSavedFingerprints() {
        val known = mutableMapOf<String, String>()
        _servers.value.forEach { server ->
            settings?.loadServerFingerprint(server.address)?.let {
                known[server.address] = it
            }
        }
        _knownFingerprints.value = known
    }

    /** Get lock status for a server. */
    fun lockStatus(address: String): LockStatus {
        val pr = _pingResults.value[address] ?: return LockStatus.UNKNOWN
        if (!pr.reachable) return LockStatus.OFFLINE
        val known = _knownFingerprints.value[address] ?: return LockStatus.NEW
        if (pr.serverFingerprint.isEmpty()) return LockStatus.NEW
        return if (pr.serverFingerprint == known) LockStatus.VERIFIED else LockStatus.CHANGED
    }
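The trust-on-first-use (TOFU) decision in `lockStatus` above can be sketched outside Android as a pure function. This is an illustrative port, not the app's code; the `lock_status` signature and its tuple argument are hypothetical stand-ins for the ping result and the saved fingerprint.

```rust
// Sketch of the TOFU lock-status decision from `lockStatus` above
// (names and signature are hypothetical, for illustration only).

#[derive(Debug, PartialEq)]
enum LockStatus { Unknown, Offline, New, Verified, Changed }

/// ping = (reachable, server_fingerprint) from the last ping, if any;
/// known = the previously saved fingerprint, if any.
fn lock_status(ping: Option<(bool, &str)>, known: Option<&str>) -> LockStatus {
    let (reachable, fp) = match ping {
        None => return LockStatus::Unknown, // never pinged
        Some(p) => p,
    };
    if !reachable { return LockStatus::Offline; }
    let known = match known {
        None => return LockStatus::New, // first contact: nothing saved yet
        Some(k) => k,
    };
    if fp.is_empty() { return LockStatus::New; }
    if fp == known { LockStatus::Verified } else { LockStatus::Changed }
}

fn main() {
    assert_eq!(lock_status(None, None), LockStatus::Unknown);
    assert_eq!(lock_status(Some((false, "ab")), Some("ab")), LockStatus::Offline);
    assert_eq!(lock_status(Some((true, "ab")), None), LockStatus::New);
    assert_eq!(lock_status(Some((true, "ab")), Some("ab")), LockStatus::Verified);
    assert_eq!(lock_status(Some((true, "cd")), Some("ab")), LockStatus::Changed);
    println!("ok");
}
```

Only a changed fingerprint blocks the call (via the key-warning dialog); `NEW` silently pins the key, which is the usual TOFU trade-off.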

    fun setRoomName(name: String) {
        _roomName.value = name
        settings?.saveRoom(name)
@@ -214,6 +314,16 @@ class CallViewModel : ViewModel(), WzpCallback {
        settings?.saveAecEnabled(enabled)
    }

    fun setDebugRecording(enabled: Boolean) {
        _debugRecording.value = enabled
        settings?.saveDebugRecording(enabled)
    }

    fun setCodecChoice(choice: Int) {
        _codecChoice.value = choice
        settings?.saveCodecChoice(choice)
    }

    /**
     * Resolve DNS hostname to IP address on the Kotlin/Android side,
     * since Rust's DNS resolution may not work on Android.
@@ -254,8 +364,17 @@ class CallViewModel : ViewModel(), WzpCallback {
        Log.i(TAG, "teardown: stopping audio, stopService=$stopService")
        val hadCall = audioStarted
        CallService.onStopFromNotification = null
        stopAudio()
        stopAudio() // sets running=false (non-blocking)
        stopStatsPolling()

        // Wait for audio threads to exit their loops before destroying the engine.
        // This guarantees no in-flight JNI calls to writeAudio/readAudio.
        val drained = audioPipeline?.awaitDrain() ?: true
        if (!drained) {
            Log.w(TAG, "teardown: audio threads did not drain in time")
        }
        audioPipeline = null

        Log.i(TAG, "teardown: stopping engine")
        try { engine?.stopCall() } catch (e: Exception) { Log.w(TAG, "stopCall err: $e") }
        try { engine?.destroy() } catch (e: Exception) { Log.w(TAG, "destroy err: $e") }
@@ -271,13 +390,43 @@ class CallViewModel : ViewModel(), WzpCallback {
        Log.i(TAG, "teardown: done")
    }

    /** Accept the new server key and proceed with the call. */
    fun acceptNewFingerprint() {
        val info = _keyWarning.value ?: return
        _knownFingerprints.value = _knownFingerprints.value.toMutableMap().also {
            it[info.address] = info.newFp
        }
        settings?.saveServerFingerprint(info.address, info.newFp)
        _keyWarning.value = null
        startCallInternal()
    }

    fun dismissKeyWarning() {
        _keyWarning.value = null
    }

    fun startCall() {
        val serverEntry = _servers.value[_selectedServer.value]
        // Check for key change before connecting
        val ls = lockStatus(serverEntry.address)
        if (ls == LockStatus.CHANGED) {
            val known = _knownFingerprints.value[serverEntry.address] ?: ""
            val current = _pingResults.value[serverEntry.address]?.serverFingerprint ?: ""
            _keyWarning.value = KeyWarningInfo(serverEntry.address, known, current)
            return
        }
        startCallInternal()
    }

    private fun startCallInternal() {
        val serverEntry = _servers.value[_selectedServer.value]
        val room = _roomName.value
        Log.i(TAG, "startCall: server=${serverEntry.address} room=$room")
        _debugReportAvailable.value = false
        _debugReportStatus.value = null
        lastCallServer = serverEntry.address
        settings?.addRecentRoom(serverEntry.address, room)
        _recentRooms.value = settings?.loadRecentRooms() ?: emptyList()
        debugReporter?.prepareForCall()
        try {
            // Teardown previous call but don't stop the service (we're about to restart it)
@@ -300,7 +449,7 @@ class CallViewModel : ViewModel(), WzpCallback {
        val seed = _seedHex.value
        val name = _alias.value
        Log.i(TAG, "startCall: resolved=$relay, alias=$name, calling engine.startCall")
        val result = engine?.startCall(relay, room, seedHex = seed, alias = name) ?: -1
        val result = engine?.startCall(relay, room, seedHex = seed, alias = name, profile = _codecChoice.value) ?: -1
        Log.i(TAG, "startCall: engine returned $result")
        // Only wire up notification callback after engine is running
        CallService.onStopFromNotification = { stopCall() }
@@ -391,6 +540,7 @@ class CallViewModel : ViewModel(), WzpCallback {
            it.playoutGainDb = _playoutGainDb.value
            it.captureGainDb = _captureGainDb.value
            it.aecEnabled = _aecEnabled.value
            it.debugRecording = _debugRecording.value
            it.start(e)
        }
        audioRouteManager?.register()
@@ -399,8 +549,7 @@ class CallViewModel : ViewModel(), WzpCallback {

    private fun stopAudio() {
        if (!audioStarted) return
        audioPipeline?.stop()
        audioPipeline = null
        audioPipeline?.stop() // sets running=false; DON'T null — teardown needs awaitDrain()
        audioRouteManager?.unregister()
        audioRouteManager?.setSpeaker(false)
        _isSpeaker.value = false

File diff suppressed because it is too large.

141 android/app/src/main/java/com/wzp/ui/components/Identicon.kt (Normal file)
@@ -0,0 +1,141 @@
package com.wzp.ui.components

import android.widget.Toast
import androidx.compose.foundation.Canvas
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.size
import androidx.compose.foundation.shape.RoundedCornerShape
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.clip
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.geometry.Size
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.platform.LocalClipboardManager
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.text.AnnotatedString
import androidx.compose.ui.unit.Dp
import androidx.compose.ui.unit.dp
import kotlin.math.min

/**
 * Deterministic identicon — generates a unique 5x5 symmetric pattern
 * from a hex fingerprint string. Identical algorithm to the desktop
 * TypeScript implementation in identicon.ts.
 */
@Composable
fun Identicon(
    fingerprint: String,
    size: Dp = 36.dp,
    clickToCopy: Boolean = true,
    modifier: Modifier = Modifier,
) {
    val clipboard = LocalClipboardManager.current
    val context = LocalContext.current
    val bytes = hashBytes(fingerprint)
    val (bg, fg) = deriveColors(bytes)
    val grid = buildGrid(bytes)

    Canvas(
        modifier = modifier
            .size(size)
            .clip(RoundedCornerShape(size * 0.12f))
            .then(
                if (clickToCopy && fingerprint.isNotEmpty()) {
                    Modifier.clickable {
                        clipboard.setText(AnnotatedString(fingerprint))
                        Toast.makeText(context, "Copied", Toast.LENGTH_SHORT).show()
                    }
                } else Modifier
            )
    ) {
        val cellW = this.size.width / 5f
        val cellH = this.size.height / 5f

        // Background
        drawRect(color = bg, size = this.size)

        // Foreground cells
        for (y in 0 until 5) {
            for (x in 0 until 5) {
                if (grid[y][x]) {
                    drawRect(
                        color = fg,
                        topLeft = Offset(x * cellW, y * cellH),
                        size = Size(cellW, cellH),
                    )
                }
            }
        }
    }
}

/**
 * Fingerprint text that copies to clipboard on tap.
 */
@Composable
fun CopyableFingerprint(
    fingerprint: String,
    modifier: Modifier = Modifier,
    style: androidx.compose.ui.text.TextStyle = androidx.compose.material3.MaterialTheme.typography.bodySmall,
    color: Color = Color.Unspecified,
) {
    val clipboard = LocalClipboardManager.current
    val context = LocalContext.current

    androidx.compose.material3.Text(
        text = fingerprint,
        style = style,
        color = color,
        modifier = modifier.clickable {
            if (fingerprint.isNotEmpty()) {
                clipboard.setText(AnnotatedString(fingerprint))
                Toast.makeText(context, "Fingerprint copied", Toast.LENGTH_SHORT).show()
            }
        }
    )
}

// --- Internal helpers (matching desktop identicon.ts) ---

private fun hashBytes(hex: String): List<Int> {
    val clean = hex.filter { it.isLetterOrDigit() }
    val bytes = mutableListOf<Int>()
    var i = 0
    while (i + 1 < clean.length) {
        val b = clean.substring(i, i + 2).toIntOrNull(16) ?: 0
        bytes.add(b)
        i += 2
    }
    // Pad to at least 16 bytes
    while (bytes.size < 16) bytes.add(0)
    return bytes
}

private fun deriveColors(bytes: List<Int>): Pair<Color, Color> {
    val hue1 = bytes[0] * 360f / 256f
    val hue2 = (bytes[1] * 360f / 256f + 120f) % 360f
    val bg = hslToColor(hue1, 0.65f, 0.35f)
    val fg = hslToColor(hue2, 0.70f, 0.55f)
    return bg to fg
}

private fun buildGrid(bytes: List<Int>): List<List<Boolean>> {
    return (0 until 5).map { y ->
        val left = (0 until 3).map { x ->
            val idx = 2 + y * 3 + x
            bytes[idx % bytes.size] > 128
        }
        // Mirror: col3 = col1, col4 = col0
        listOf(left[0], left[1], left[2], left[1], left[0])
    }
}

private fun hslToColor(h: Float, s: Float, l: Float): Color {
    val k = { n: Float -> (n + h / 30f) % 12f }
    val a = s * min(l, 1f - l)
    val f = { n: Float ->
        l - a * maxOf(-1f, minOf(k(n) - 3f, minOf(9f - k(n), 1f)))
    }
    return Color(f(0f), f(8f), f(4f))
}
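The `hashBytes` and `buildGrid` helpers above can be checked in isolation. The sketch below ports just those two (a Rust rendition for illustration, not the desktop identicon.ts) and asserts the property the mirror comment states: column 3 equals column 1 and column 4 equals column 0 in every row, which is what makes the identicon left-right symmetric.

```rust
// Illustrative Rust port of hashBytes + buildGrid from Identicon.kt.

fn hash_bytes(hex: &str) -> Vec<u8> {
    let clean: String = hex.chars().filter(|c| c.is_ascii_alphanumeric()).collect();
    let mut bytes: Vec<u8> = clean
        .as_bytes()
        .chunks(2)
        .filter(|c| c.len() == 2) // ignore a trailing odd nibble
        .map(|c| u8::from_str_radix(std::str::from_utf8(c).unwrap(), 16).unwrap_or(0))
        .collect();
    while bytes.len() < 16 { bytes.push(0); } // pad to at least 16 bytes
    bytes
}

fn build_grid(bytes: &[u8]) -> Vec<[bool; 5]> {
    (0..5).map(|y| {
        let left: Vec<bool> = (0..3)
            .map(|x| bytes[(2 + y * 3 + x) % bytes.len()] > 128)
            .collect();
        // Mirror: col3 = col1, col4 = col0
        [left[0], left[1], left[2], left[1], left[0]]
    }).collect()
}

fn main() {
    let bytes = hash_bytes("a1b2c3d4e5f60718293a4b5c6d7e8f90");
    assert_eq!(bytes.len(), 16);
    let grid = build_grid(&bytes);
    for row in &grid {
        assert_eq!(row[3], row[1]); // mirrored
        assert_eq!(row[4], row[0]); // mirrored
    }
    println!("symmetric");
}
```

Because only bytes 2..17 feed the grid, bytes 0 and 1 are free to pick the two hues without also changing the pattern.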
@@ -1,5 +1,6 @@
package com.wzp.ui.settings

import androidx.compose.foundation.clickable
import android.content.ClipData
import android.content.ClipboardManager
import android.content.Context
@@ -22,6 +23,7 @@ import androidx.compose.material3.AlertDialog
import androidx.compose.material3.Button
import androidx.compose.material3.ButtonDefaults
import androidx.compose.material3.Divider
import androidx.compose.material3.RadioButton
import androidx.compose.material3.FilledTonalButton
import androidx.compose.material3.FilledTonalIconButton
import androidx.compose.material3.IconButtonDefaults
@@ -158,20 +160,30 @@ fun SettingsScreen(

    Spacer(modifier = Modifier.height(16.dp))

    // Fingerprint display
    // Fingerprint display with identicon
    val fingerprint = if (draftSeedHex.length >= 16) draftSeedHex.take(16).uppercase() else "Not generated"
    Text(
        text = "Fingerprint",
        style = MaterialTheme.typography.labelSmall,
        color = MaterialTheme.colorScheme.onSurfaceVariant
    )
    Text(
        text = fingerprint.chunked(4).joinToString(" "),
        style = MaterialTheme.typography.bodyMedium.copy(
            fontFamily = FontFamily.Monospace
        ),
        color = MaterialTheme.colorScheme.onSurface
    )
    Row(
        verticalAlignment = Alignment.CenterVertically,
        modifier = Modifier.padding(vertical = 4.dp)
    ) {
        com.wzp.ui.components.Identicon(
            fingerprint = draftSeedHex,
            size = 40.dp,
        )
        Spacer(modifier = Modifier.width(12.dp))
        com.wzp.ui.components.CopyableFingerprint(
            fingerprint = fingerprint.chunked(4).joinToString(" "),
            style = MaterialTheme.typography.bodyMedium.copy(
                fontFamily = FontFamily.Monospace
            ),
            color = MaterialTheme.colorScheme.onSurface,
        )
    }

    Spacer(modifier = Modifier.height(12.dp))

@@ -231,6 +243,51 @@ fun SettingsScreen(
    )
}

    Spacer(modifier = Modifier.height(12.dp))

    // Quality selection — slider from best (studio 64k) to worst (codec2 1.2k) + auto
    val qualityLabels = listOf(
        "Studio 64k", "Studio 48k", "Studio 32k", "Auto",
        "Opus 24k", "Opus 6k", "Codec2 3.2k", "Codec2 1.2k"
    )
    // Map slider position to JNI profile int:
    // 0=Studio64k(6), 1=Studio48k(5), 2=Studio32k(4), 3=Auto(7),
    // 4=Opus24k(0), 5=Opus6k(1), 6=Codec2_3.2k(3), 7=Codec2_1.2k(2)
    val sliderToProfile = intArrayOf(6, 5, 4, 7, 0, 1, 3, 2)
    val profileToSlider = mapOf(6 to 0, 5 to 1, 4 to 2, 7 to 3, 0 to 4, 1 to 5, 3 to 6, 2 to 7)
    val qualityColors = listOf(
        Color(0xFF22C55E), Color(0xFF4ADE80), Color(0xFF86EFAC), Color(0xFFA3E635),
        Color(0xFFA3E635), Color(0xFFFACC15), Color(0xFFE97320), Color(0xFF991B1B)
    )
    val currentCodec by viewModel.codecChoice.collectAsState()
    val sliderPos = profileToSlider[currentCodec] ?: 3
    Text("Quality", style = MaterialTheme.typography.bodyMedium)
    Text(
        text = "Decode always accepts all codecs",
        style = MaterialTheme.typography.bodySmall,
        color = MaterialTheme.colorScheme.onSurfaceVariant
    )
    Spacer(modifier = Modifier.height(4.dp))
    Text(
        text = qualityLabels[sliderPos],
        style = MaterialTheme.typography.titleMedium.copy(fontWeight = FontWeight.Bold),
        color = qualityColors[sliderPos]
    )
    Slider(
        value = sliderPos.toFloat(),
        onValueChange = { viewModel.setCodecChoice(sliderToProfile[it.toInt()]) },
        valueRange = 0f..7f,
        steps = 6,
        modifier = Modifier.fillMaxWidth()
    )
    Row(
        modifier = Modifier.fillMaxWidth(),
        horizontalArrangement = Arrangement.SpaceBetween
    ) {
        Text("Best", style = MaterialTheme.typography.labelSmall, color = Color(0xFF22C55E))
        Text("Lowest", style = MaterialTheme.typography.labelSmall, color = Color(0xFF991B1B))
    }

    Spacer(modifier = Modifier.height(24.dp))
    Divider()
    Spacer(modifier = Modifier.height(16.dp))

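The `sliderToProfile` array and `profileToSlider` map above must stay exact inverses, or a selected quality would jump when the screen recomposes. That invariant is easy to check mechanically; the sketch below does so in Rust (standing in for a unit test, with a derived inverse instead of the hand-written Kotlin map).

```rust
// Round-trip check for the slider<->profile mapping in SettingsScreen.
// SLIDER_TO_PROFILE mirrors the Kotlin intArrayOf(6, 5, 4, 7, 0, 1, 3, 2).

const SLIDER_TO_PROFILE: [usize; 8] = [6, 5, 4, 7, 0, 1, 3, 2];

fn profile_to_slider(profile: usize) -> usize {
    // Derive the inverse by position instead of hard-coding it.
    SLIDER_TO_PROFILE.iter().position(|&p| p == profile).expect("unknown profile")
}

fn main() {
    // Matches the hand-written Kotlin map: 6->0, 5->1, 4->2, 7->3, 0->4, 1->5, 3->6, 2->7
    assert_eq!(profile_to_slider(6), 0);
    assert_eq!(profile_to_slider(7), 3);
    assert_eq!(profile_to_slider(2), 7);
    // Every slider position maps to a profile and back unchanged.
    for s in 0..8 {
        assert_eq!(profile_to_slider(SLIDER_TO_PROFILE[s]), s);
    }
    println!("mapping consistent");
}
```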
@@ -17,7 +17,7 @@ wzp-crypto = { workspace = true }
wzp-transport = { workspace = true }
tokio = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
bytes = { workspace = true }
serde = { workspace = true }
serde_json = "1"

@@ -1,91 +1,128 @@
//! Lock-free SPSC ring buffers for audio PCM transfer between
//! Kotlin AudioRecord/AudioTrack threads and the Rust engine.
//! Lock-free SPSC ring buffer — "Reader-Detects-Lap" architecture.
//!
//! These use a simple spin-free design: the producer writes and advances
//! a write cursor, the consumer reads and advances a read cursor.
//! Both cursors are atomic so no mutex is needed.
//! SPSC invariant: the producer ONLY writes `write_pos`, the consumer
//! ONLY writes `read_pos`. Neither thread touches the other's cursor.
//!
//! On overflow (writer laps the reader), the writer simply overwrites
//! old buffer data. The reader detects the lap via `available() >
//! RING_CAPACITY` and snaps its own `read_pos` forward.
//!
//! Capacity is a power of 2 for bitmask indexing (no modulo).

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

/// Ring buffer capacity in i16 samples.
/// 960 samples * 10 frames = ~200ms of audio at 48kHz mono.
const RING_CAPACITY: usize = 960 * 10;
/// Ring buffer capacity — power of 2 for bitmask indexing.
/// 16384 samples = 341.3ms at 48kHz mono. 70% more headroom
/// than the previous 9600 (200ms) for surviving Android GC pauses.
const RING_CAPACITY: usize = 16384; // 2^14
const RING_MASK: usize = RING_CAPACITY - 1;

/// Lock-free single-producer single-consumer ring buffer for i16 PCM samples.
pub struct AudioRing {
    buf: Box<[i16; RING_CAPACITY]>,
    buf: Box<[i16]>,
    /// Monotonically increasing write cursor. ONLY written by producer.
    write_pos: AtomicUsize,
    /// Monotonically increasing read cursor. ONLY written by consumer.
    read_pos: AtomicUsize,
    /// Incremented by reader when it detects it was lapped (overflow).
    overflow_count: AtomicU64,
    /// Incremented by reader when ring is empty (underrun).
    underrun_count: AtomicU64,
}

// SAFETY: AudioRing is designed for SPSC — one thread writes, one reads.
// The atomics ensure visibility. The buffer itself is never accessed
// from the same index by both threads simultaneously because the
// producer only writes to positions between write_pos and read_pos,
// and the consumer only reads from positions between read_pos and write_pos.
// SAFETY: AudioRing is SPSC — one thread writes (producer), one reads (consumer).
// The producer only writes write_pos. The consumer only writes read_pos.
// Neither thread writes the other's cursor. Buffer indices are derived from
// the owning thread's cursor, ensuring no concurrent access to the same index.
unsafe impl Send for AudioRing {}
unsafe impl Sync for AudioRing {}

impl AudioRing {
    pub fn new() -> Self {
        debug_assert!(RING_CAPACITY.is_power_of_two());
        Self {
            buf: Box::new([0i16; RING_CAPACITY]),
            buf: vec![0i16; RING_CAPACITY].into_boxed_slice(),
            write_pos: AtomicUsize::new(0),
            read_pos: AtomicUsize::new(0),
            overflow_count: AtomicU64::new(0),
            underrun_count: AtomicU64::new(0),
        }
    }

    /// Number of samples available to read.
    /// Number of samples available to read (clamped to capacity).
    pub fn available(&self) -> usize {
        let w = self.write_pos.load(Ordering::Acquire);
        let r = self.read_pos.load(Ordering::Acquire);
        w.wrapping_sub(r)
        let r = self.read_pos.load(Ordering::Relaxed);
        w.wrapping_sub(r).min(RING_CAPACITY)
    }

    /// Number of samples that can be written without overwriting.
    /// Number of samples that can be written without overwriting unread data.
    pub fn free_space(&self) -> usize {
        RING_CAPACITY - self.available()
        RING_CAPACITY.saturating_sub(self.available())
    }

    /// Write samples into the ring. Returns number of samples written.
    /// Drops oldest samples if the ring is full.
    ///
    /// If the ring is full, old data is silently overwritten. The reader
    /// will detect the lap and self-correct. The writer NEVER touches
    /// `read_pos` — this is the key invariant that prevents cursor desync.
    pub fn write(&self, samples: &[i16]) -> usize {
        let w = self.write_pos.load(Ordering::Relaxed);
        let count = samples.len().min(RING_CAPACITY);
        let w = self.write_pos.load(Ordering::Relaxed);

        for i in 0..count {
            let idx = (w + i) % RING_CAPACITY;
            // SAFETY: We're the only writer, and the reader won't read
            // past read_pos which we haven't advanced past yet.
            unsafe {
                let ptr = self.buf.as_ptr() as *mut i16;
                *ptr.add(idx) = samples[i];
                *ptr.add((w + i) & RING_MASK) = samples[i];
            }
        }

        self.write_pos.store(w.wrapping_add(count), Ordering::Release);

        // If we overwrote unread data, advance read_pos
        if self.available() > RING_CAPACITY {
            let new_read = self.write_pos.load(Ordering::Relaxed).wrapping_sub(RING_CAPACITY);
            self.read_pos.store(new_read, Ordering::Release);
        }

        count
    }

    /// Read samples from the ring into `out`. Returns number of samples read.
    ///
    /// If the writer has lapped the reader (overflow), `read_pos` is snapped
    /// forward to the oldest valid data. This is safe because only the
    /// reader thread writes `read_pos`.
    pub fn read(&self, out: &mut [i16]) -> usize {
        let avail = self.available();
        let count = out.len().min(avail);
        let w = self.write_pos.load(Ordering::Acquire);
        let mut r = self.read_pos.load(Ordering::Relaxed);

        let mut avail = w.wrapping_sub(r);

        // Lap detection: writer has overwritten our unread data.
        // Snap read_pos forward to oldest valid data in the buffer.
        if avail > RING_CAPACITY {
            r = w.wrapping_sub(RING_CAPACITY);
            avail = RING_CAPACITY;
            self.overflow_count.fetch_add(1, Ordering::Relaxed);
        }

        let count = out.len().min(avail);
        if count == 0 {
            if w == r {
                self.underrun_count.fetch_add(1, Ordering::Relaxed);
            }
            return 0;
        }

        let r = self.read_pos.load(Ordering::Relaxed);
        for i in 0..count {
            let idx = (r + i) % RING_CAPACITY;
            out[i] = unsafe { *self.buf.as_ptr().add(idx) };
            out[i] = unsafe { *self.buf.as_ptr().add((r + i) & RING_MASK) };
        }

        self.read_pos.store(r.wrapping_add(count), Ordering::Release);
        count
    }

    /// Number of overflow events (reader was lapped by writer).
    pub fn overflow_count(&self) -> u64 {
        self.overflow_count.load(Ordering::Relaxed)
    }

    /// Number of underrun events (reader found empty buffer).
    pub fn underrun_count(&self) -> u64 {
        self.underrun_count.load(Ordering::Relaxed)
    }
}

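The reader-detects-lap scheme above rests on one piece of arithmetic: cursors increase forever (wrapping at `usize::MAX`), so `write_pos.wrapping_sub(read_pos)` is the true unread distance, and a distance greater than capacity means the writer overwrote unread data. A single-threaded sketch of just that arithmetic (the `reader_view` helper is hypothetical, distilled from `AudioRing::read`):

```rust
/// Given monotonically increasing cursors, return the reader's corrected
/// (read_pos, available) pair, snapping forward if the writer lapped it.
fn reader_view(write_pos: usize, read_pos: usize, cap: usize) -> (usize, usize) {
    let avail = write_pos.wrapping_sub(read_pos);
    if avail > cap {
        // Lapped: the oldest still-valid sample is write_pos - cap.
        (write_pos.wrapping_sub(cap), cap)
    } else {
        (read_pos, avail)
    }
}

fn main() {
    // Writer produced 13 samples into a capacity-8 ring; reader consumed none.
    // Samples 0..5 were overwritten, so the reader snaps to 5 and sees 8.
    assert_eq!(reader_view(13, 0, 8), (5, 8));
    // No lap: everything between the cursors is still readable.
    assert_eq!(reader_view(6, 2, 8), (2, 4));
    // Wrapping subtraction stays correct even across usize overflow.
    assert_eq!(reader_view(3, usize::MAX - 1, 8), (usize::MAX - 1, 5));
    println!("lap handled");
}
```

Because only the reader applies this correction to `read_pos`, and only the writer advances `write_pos`, neither thread ever races on the other's cursor, which is the whole SPSC invariant.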
@@ -16,8 +16,6 @@ use std::time::Instant;
use bytes::Bytes;
use tracing::{error, info, warn};
use wzp_codec::agc::AutoGainControl;
use wzp_codec::opus_dec::OpusDecoder;
use wzp_codec::opus_enc::OpusEncoder;
use wzp_crypto::{KeyExchange, WarzoneKeyExchange};
use wzp_fec::{RaptorQFecDecoder, RaptorQFecEncoder};
use wzp_proto::{
@@ -29,12 +27,19 @@ use crate::audio_ring::AudioRing;
use crate::commands::EngineCommand;
use crate::stats::{CallState, CallStats};

/// Opus frame size at 48kHz mono, 20ms = 960 samples.
const FRAME_SAMPLES: usize = 960;
/// Max frame size at 48kHz mono (40ms = 1920 samples, for Codec2/Opus6k).
const MAX_FRAME_SAMPLES: usize = 1920;

/// Compute frame samples at 48kHz for a given profile.
fn frame_samples_for(profile: &QualityProfile) -> usize {
    (profile.frame_duration_ms as usize) * 48 // 48000 / 1000
}

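The arithmetic in `frame_samples_for` is just milliseconds times 48 samples per millisecond at 48kHz mono. A standalone check, with `QualityProfile` stubbed down to the one field the function reads (the real type has more fields):

```rust
// Sanity check for frame_samples_for: samples per frame = frame_ms * 48
// at 48kHz mono. QualityProfile is a bare stand-in here.

struct QualityProfile { frame_duration_ms: u32 }

fn frame_samples_for(profile: &QualityProfile) -> usize {
    (profile.frame_duration_ms as usize) * 48 // 48000 samples/sec / 1000 ms
}

fn main() {
    // A 20ms Opus frame gives the old FRAME_SAMPLES constant.
    assert_eq!(frame_samples_for(&QualityProfile { frame_duration_ms: 20 }), 960);
    // A 40ms Codec2/Opus6k frame gives MAX_FRAME_SAMPLES.
    assert_eq!(frame_samples_for(&QualityProfile { frame_duration_ms: 40 }), 1920);
    println!("960 / 1920");
}
```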
/// Configuration to start a call.
|
||||
pub struct CallStartConfig {
|
||||
pub profile: QualityProfile,
|
||||
/// When true, use the relay's chosen_profile from CallAnswer instead of local profile.
|
||||
pub auto_profile: bool,
|
||||
pub relay_addr: String,
|
||||
pub room: String,
|
||||
pub auth_token: Vec<u8>,
|
||||
@@ -46,6 +51,7 @@ impl Default for CallStartConfig {
|
||||
fn default() -> Self {
|
||||
Self {
|
||||
profile: QualityProfile::GOOD,
|
||||
auto_profile: false,
|
||||
relay_addr: String::new(),
|
||||
room: String::new(),
|
||||
auth_token: Vec::new(),
|
||||
@@ -123,6 +129,7 @@ impl WzpEngine {
|
||||
let room = config.room.clone();
|
||||
let identity_seed = config.identity_seed;
|
||||
let profile = config.profile;
|
||||
let auto_profile = config.auto_profile;
|
||||
let alias = config.alias.clone();
|
||||
let state = self.state.clone();
|
||||
|
||||
@@ -131,7 +138,7 @@ impl WzpEngine {
|
||||
|
||||
let state_clone = state.clone();
|
||||
runtime.block_on(async move {
|
||||
if let Err(e) = run_call(relay_addr, &room, &identity_seed, profile, alias.as_deref(), state_clone).await
|
||||
if let Err(e) = run_call(relay_addr, &room, &identity_seed, profile, auto_profile, alias.as_deref(), state_clone).await
|
||||
{
|
||||
error!("call failed: {e}");
|
||||
}
|
||||
@@ -169,6 +176,53 @@ impl WzpEngine {
|
||||
info!("stop_call: done");
|
||||
}
|
||||
|
||||
/// Ping a relay — same pattern as start_call (creates runtime on calling thread).
|
||||
/// Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or error.
|
||||
pub fn ping_relay(&self, address: &str) -> Result<String, anyhow::Error> {
|
||||
let addr: SocketAddr = address.parse()?;
|
||||
let _ = rustls::crypto::ring::default_provider().install_default();
|
||||
|
||||
let rt = tokio::runtime::Builder::new_current_thread()
|
||||
.enable_all()
|
||||
.build()?;
|
||||
|
||||
let result = rt.block_on(async {
|
||||
let bind: SocketAddr = "0.0.0.0:0".parse().unwrap();
|
||||
let endpoint = wzp_transport::create_endpoint(bind, None)?;
|
||||
let client_cfg = wzp_transport::client_config();
|
||||
let start = Instant::now();
|
||||
|
||||
let conn_result = tokio::time::timeout(
|
||||
std::time::Duration::from_secs(3),
|
||||
wzp_transport::connect(&endpoint, addr, "ping", client_cfg),
|
||||
)
|
||||
.await;
|
||||
|
||||
// Always close endpoint to prevent resource leaks
|
||||
endpoint.close(0u32.into(), b"done");
|
||||
|
||||
let conn = conn_result.map_err(|_| anyhow::anyhow!("timeout"))??;
|
||||
let rtt_ms = start.elapsed().as_millis() as u64;
|
||||
let server_fp = conn
|
||||
.peer_identity()
|
||||
.and_then(|id| id.downcast::<Vec<rustls::pki_types::CertificateDer>>().ok())
|
||||
.and_then(|certs| certs.first().map(|c| {
|
||||
use std::hash::{Hash, Hasher};
|
||||
let mut h = std::collections::hash_map::DefaultHasher::new();
|
||||
c.as_ref().hash(&mut h);
|
||||
format!("{:016x}", h.finish())
|
||||
}))
|
||||
.unwrap_or_default();
|
||||
conn.close(0u32.into(), b"ping");
|
||||
|
||||
Ok::<_, anyhow::Error>(format!(r#"{{"rtt_ms":{},"server_fingerprint":"{}"}}"#, rtt_ms, server_fp))
|
||||
});
|
||||
|
||||
// Shutdown runtime cleanly with timeout
|
||||
rt.shutdown_timeout(std::time::Duration::from_millis(500));
|
||||
result
|
||||
}
|
||||
|
||||
pub fn set_mute(&self, muted: bool) {
|
||||
self.state.muted.store(muted, Ordering::Relaxed);
|
||||
}
|
||||
@@ -183,6 +237,9 @@ impl WzpEngine {
|
||||
stats.duration_secs = start.elapsed().as_secs_f64();
|
||||
}
|
||||
stats.audio_level = self.state.audio_level_rms.load(Ordering::Relaxed);
|
||||
stats.playout_overflows = self.state.playout_ring.overflow_count();
|
||||
stats.playout_underruns = self.state.playout_ring.underrun_count();
|
||||
stats.capture_overflows = self.state.capture_ring.overflow_count();
|
||||
stats
|
||||
}
|
||||
|
||||
@@ -224,6 +281,7 @@ async fn run_call(
|
||||
room: &str,
|
||||
identity_seed: &[u8; 32],
|
||||
profile: QualityProfile,
|
||||
auto_profile: bool,
|
||||
alias: Option<&str>,
|
||||
state: Arc<EngineState>,
|
||||
) -> Result<(), anyhow::Error> {
|
||||
@@ -258,6 +316,9 @@ async fn run_call(
|
||||
ephemeral_pub,
|
||||
signature,
|
||||
supported_profiles: vec![
|
||||
QualityProfile::STUDIO_64K,
|
||||
QualityProfile::STUDIO_48K,
|
||||
QualityProfile::STUDIO_32K,
|
||||
QualityProfile::GOOD,
|
||||
QualityProfile::DEGRADED,
|
||||
QualityProfile::CATASTROPHIC,
|
||||
@@ -272,8 +333,8 @@ async fn run_call(
        .await?
        .ok_or_else(|| anyhow::anyhow!("connection closed before CallAnswer"))?;

-   let relay_ephemeral_pub = match answer {
-       SignalMessage::CallAnswer { ephemeral_pub, .. } => ephemeral_pub,
+   let (relay_ephemeral_pub, chosen_profile) = match answer {
+       SignalMessage::CallAnswer { ephemeral_pub, chosen_profile, .. } => (ephemeral_pub, chosen_profile),
        other => {
            return Err(anyhow::anyhow!(
                "expected CallAnswer, got {:?}",
@@ -282,19 +343,25 @@ async fn run_call(
        }
    };

+   // Auto mode: use the relay's chosen profile instead of the local preference
+   let profile = if auto_profile {
+       info!(chosen = ?chosen_profile.codec, "auto mode: using relay's chosen profile");
+       chosen_profile
+   } else {
+       profile
+   };
+
    let _session = kx.derive_session(&relay_ephemeral_pub)?;
-   info!("handshake complete, call active");
+   info!(codec = ?profile.codec, "handshake complete, call active");

    {
        let mut stats = state.stats.lock().unwrap();
        stats.state = CallState::Active;
    }

-   // Initialize Opus codec
-   let mut encoder =
-       OpusEncoder::new(profile).map_err(|e| anyhow::anyhow!("opus encoder init: {e}"))?;
-   let mut decoder =
-       OpusDecoder::new(profile).map_err(|e| anyhow::anyhow!("opus decoder init: {e}"))?;
+   // Initialize codec (Opus or Codec2 based on profile)
+   let mut encoder = wzp_codec::create_encoder(profile);
+   let mut decoder = wzp_codec::create_decoder(profile);

    // Initialize FEC encoder/decoder
    let mut fec_enc = wzp_fec::create_encoder(&profile);
@@ -304,18 +371,22 @@ async fn run_call(
    let mut capture_agc = AutoGainControl::new();
    let mut playout_agc = AutoGainControl::new();

+   let frame_samples = frame_samples_for(&profile);
    info!(
        codec = ?profile.codec,
        fec_ratio = profile.fec_ratio,
        frames_per_block = profile.frames_per_block,
-       "codec + FEC + AGC initialized (48kHz mono, 20ms frames)"
+       frame_ms = profile.frame_duration_ms,
+       frame_samples,
+       "codec + FEC + AGC initialized"
    );

    let seq = AtomicU16::new(0);
    let ts = AtomicU32::new(0);
    let transport_recv = transport.clone();

-   // Pre-allocate buffers
-   let mut capture_buf = vec![0i16; FRAME_SAMPLES];
+   // Pre-allocate buffers (sized for current profile)
+   let mut capture_buf = vec![0i16; frame_samples];
    let mut encode_buf = vec![0u8; encoder.max_frame_bytes()];
    let mut frame_in_block: u8 = 0;
    let mut block_id: u8 = 0;
@@ -333,19 +404,25 @@ async fn run_call(
    let mut last_stats_log = Instant::now();
    let mut frames_sent: u64 = 0;
    let mut frames_dropped: u64 = 0;
+   // Per-step timing accumulators (reset every stats log)
+   let mut t_agc_us: u64 = 0;
+   let mut t_opus_us: u64 = 0;
+   let mut t_fec_us: u64 = 0;
+   let mut t_send_us: u64 = 0;
+   let mut t_frames: u64 = 0;
    loop {
        if !state.running.load(Ordering::Relaxed) {
            break;
        }

        let avail = state.capture_ring.available();
-       if avail < FRAME_SAMPLES {
+       if avail < frame_samples {
            tokio::time::sleep(std::time::Duration::from_millis(5)).await;
            continue;
        }

        let read = state.capture_ring.read(&mut capture_buf);
-       if read < FRAME_SAMPLES {
+       if read < frame_samples {
            continue;
        }

@@ -356,9 +433,12 @@ async fn run_call(
        }

        // AGC: normalize capture volume before encoding
+       let t0 = Instant::now();
        capture_agc.process_frame(&mut capture_buf);
+       t_agc_us += t0.elapsed().as_micros() as u64;

        // Opus encode
+       let t0 = Instant::now();
        let encoded_len = match encoder.encode(&capture_buf, &mut encode_buf) {
            Ok(n) => n,
            Err(e) => {
@@ -366,11 +446,12 @@ async fn run_call(
                continue;
            }
        };
+       t_opus_us += t0.elapsed().as_micros() as u64;
        let encoded = &encode_buf[..encoded_len];

        // Build source packet
        let s = seq.fetch_add(1, Ordering::Relaxed);
-       let t = ts.fetch_add(FRAME_SAMPLES as u32, Ordering::Relaxed);
+       let t = ts.fetch_add(frame_samples as u32, Ordering::Relaxed);

        let source_pkt = MediaPacket {
            header: MediaHeader {
@@ -391,6 +472,7 @@ async fn run_call(
        };

        // Send source packet — drop on error, never break
+       let t0 = Instant::now();
        if let Err(e) = transport.send_media(&source_pkt).await {
            send_errors += 1;
            frames_dropped += 1;
@@ -405,11 +487,14 @@ async fn run_call(
                last_send_error_log = Instant::now();
            }
            // Don't feed to FEC either — the source is lost
+           t_send_us += t0.elapsed().as_micros() as u64;
            continue;
        }
+       t_send_us += t0.elapsed().as_micros() as u64;
        frames_sent += 1;

        // Feed encoded frame to FEC encoder
+       let t0 = Instant::now();
        if let Err(e) = fec_enc.add_source_symbol(encoded) {
            warn!("fec add_source error: {e}");
        }
@@ -466,9 +551,12 @@ async fn run_call(
            block_id = block_id.wrapping_add(1);
            frame_in_block = 0;
        }
+       t_fec_us += t0.elapsed().as_micros() as u64;
+       t_frames += 1;

        // Periodic stats every 5 seconds
        if last_stats_log.elapsed().as_secs() >= 5 {
+           let avg = |total: u64| if t_frames > 0 { total / t_frames } else { 0 };
            info!(
                seq = s,
                block_id,
@@ -476,16 +564,23 @@ async fn run_call(
                frames_dropped,
                send_errors,
                ring_avail = state.capture_ring.available(),
+               capture_overflows = state.capture_ring.overflow_count(),
+               avg_agc_us = avg(t_agc_us),
+               avg_opus_us = avg(t_opus_us),
+               avg_fec_us = avg(t_fec_us),
+               avg_send_us = avg(t_send_us),
+               avg_total_us = avg(t_agc_us + t_opus_us + t_fec_us + t_send_us),
                "send stats"
            );
+           t_agc_us = 0; t_opus_us = 0; t_fec_us = 0; t_send_us = 0; t_frames = 0;
            last_stats_log = Instant::now();
        }
    }
    info!(frames_sent, frames_dropped, send_errors, "send task ended");
    };

-   // Pre-allocate decode buffer
-   let mut decode_buf = vec![0i16; FRAME_SAMPLES];
+   // Pre-allocate decode buffer (max size to handle any incoming codec)
+   let mut decode_buf = vec![0i16; MAX_FRAME_SAMPLES];

    // Recv task: MediaPackets → FEC decode → Opus decode → playout ring
    let recv_task = async {
@@ -530,7 +625,27 @@ async fn run_call(
            );

            // Source packets: decode directly
-           if !is_repair {
+           if !is_repair && pkt.header.codec_id != CodecId::ComfortNoise {
+               // Switch decoder to match incoming codec if different
+               if pkt.header.codec_id != decoder.codec_id() {
+                   let switch_profile = match pkt.header.codec_id {
+                       CodecId::Opus24k => QualityProfile::GOOD,
+                       CodecId::Opus6k => QualityProfile::DEGRADED,
+                       CodecId::Opus32k => QualityProfile::STUDIO_32K,
+                       CodecId::Opus48k => QualityProfile::STUDIO_48K,
+                       CodecId::Opus64k => QualityProfile::STUDIO_64K,
+                       CodecId::Codec2_1200 => QualityProfile::CATASTROPHIC,
+                       CodecId::Codec2_3200 => QualityProfile {
+                           codec: CodecId::Codec2_3200,
+                           fec_ratio: 0.5,
+                           frame_duration_ms: 20,
+                           frames_per_block: 5,
+                       },
+                       other => QualityProfile { codec: other, ..QualityProfile::GOOD },
+                   };
+                   info!(from = ?decoder.codec_id(), to = ?pkt.header.codec_id, "recv: switching decoder");
+                   let _ = decoder.set_profile(switch_profile);
+               }
                match decoder.decode(&pkt.payload, &mut decode_buf) {
                    Ok(samples) => {
                        playout_agc.process_frame(&mut decode_buf[..samples]);
@@ -578,6 +693,8 @@ async fn run_call(
                recv_errors,
                max_recv_gap_ms,
                playout_avail = state.playout_ring.available(),
+               playout_overflows = state.playout_ring.overflow_count(),
+               playout_underruns = state.playout_ring.underrun_count(),
                "recv stats"
            );
            max_recv_gap_ms = 0;

@@ -21,11 +21,24 @@ unsafe fn handle_ref(handle: jlong) -> &'static mut EngineHandle {
    unsafe { &mut *(handle as *mut EngineHandle) }
}

+/// 7 = auto (use relay's chosen profile)
+const PROFILE_AUTO: jint = 7;
+
fn profile_from_int(value: jint) -> QualityProfile {
    match value {
-       1 => QualityProfile::DEGRADED,
-       2 => QualityProfile::CATASTROPHIC,
-       _ => QualityProfile::GOOD,
+       0 => QualityProfile::GOOD,         // Opus 24k
+       1 => QualityProfile::DEGRADED,     // Opus 6k
+       2 => QualityProfile::CATASTROPHIC, // Codec2 1.2k
+       3 => QualityProfile {              // Codec2 3.2k
+           codec: wzp_proto::CodecId::Codec2_3200,
+           fec_ratio: 0.5,
+           frame_duration_ms: 20,
+           frames_per_block: 5,
+       },
+       4 => QualityProfile::STUDIO_32K,   // Opus 32k
+       5 => QualityProfile::STUDIO_48K,   // Opus 48k
+       6 => QualityProfile::STUDIO_64K,   // Opus 64k
+       _ => QualityProfile::GOOD,         // auto falls back to GOOD
    }
}

@@ -35,11 +48,25 @@ static INIT_LOGGING: Once = Once::new();
/// Safe to call multiple times — only the first call takes effect.
fn init_logging() {
    INIT_LOGGING.call_once(|| {
-       use tracing_subscriber::layer::SubscriberExt;
-       use tracing_subscriber::util::SubscriberInitExt;
-       if let Ok(layer) = tracing_android::layer("wzp_android") {
-           let _ = tracing_subscriber::registry().with(layer).try_init();
-       }
+       // Wrap in catch_unwind — sharded_slab allocation inside
+       // tracing_subscriber::registry() can crash on some Android
+       // devices if scudo malloc fails during early initialization.
+       let _ = std::panic::catch_unwind(|| {
+           use tracing_subscriber::layer::SubscriberExt;
+           use tracing_subscriber::util::SubscriberInitExt;
+           use tracing_subscriber::EnvFilter;
+           if let Ok(layer) = tracing_android::layer("wzp_android") {
+               // Filter: INFO for our crates, WARN for everything else.
+               // The jni crate emits VERBOSE logs for every method lookup
+               // (~10 lines per JNI call, 100+ calls/sec) which floods logcat
+               // and causes the system to kill the app.
+               let filter = EnvFilter::new("warn,wzp_android=info,wzp_proto=info,wzp_transport=info,wzp_codec=info,wzp_fec=info,wzp_crypto=info");
+               let _ = tracing_subscriber::registry()
+                   .with(layer)
+                   .with(filter)
+                   .try_init();
+           }
+       });
    });
}

@@ -71,6 +98,7 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeStartCall(
    seed_hex_j: JString,
    token_j: JString,
    alias_j: JString,
+   profile_j: jint,
) -> jint {
    let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
        let relay_addr: String = env.get_string(&relay_addr_j).map(|s| s.into()).unwrap_or_default();
@@ -96,7 +124,8 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeStartCall(
        }

        let config = CallStartConfig {
-           profile: QualityProfile::GOOD,
+           profile: profile_from_int(profile_j),
+           auto_profile: profile_j == PROFILE_AUTO,
            relay_addr,
            room,
            auth_token: if token.is_empty() { Vec::new() } else { token.into_bytes() },
@@ -209,7 +238,6 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeWriteAudio(
            return 0;
        }
        let mut buf = vec![0i16; len];
-       // GetShortArrayRegion copies Java array into our buffer
        if env.get_short_array_region(&pcm, 0, &mut buf).is_err() {
            return 0;
        }
@@ -243,6 +271,56 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeReadAudio(
    result.unwrap_or(0)
}

+/// Write captured PCM from a DirectByteBuffer — zero JNI array copies.
+/// The ByteBuffer must contain little-endian i16 samples.
+/// Called from the AudioRecord capture thread.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeWriteAudioDirect(
+    env: JNIEnv,
+    _class: JClass,
+    handle: jlong,
+    buffer: jni::objects::JByteBuffer,
+    sample_count: jint,
+) -> jint {
+    let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
+        let h = unsafe { handle_ref(handle) };
+        let ptr = env.get_direct_buffer_address(&buffer).unwrap_or(std::ptr::null_mut());
+        if ptr.is_null() || sample_count <= 0 {
+            return 0;
+        }
+        let samples = unsafe {
+            std::slice::from_raw_parts(ptr as *const i16, sample_count as usize)
+        };
+        h.engine.write_audio(samples) as jint
+    }));
+    result.unwrap_or(0)
+}
+
+/// Read decoded PCM into a DirectByteBuffer — zero JNI array copies.
+/// The ByteBuffer will be filled with little-endian i16 samples.
+/// Called from the AudioTrack playout thread.
+#[unsafe(no_mangle)]
+pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeReadAudioDirect(
+    env: JNIEnv,
+    _class: JClass,
+    handle: jlong,
+    buffer: jni::objects::JByteBuffer,
+    max_samples: jint,
+) -> jint {
+    let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
+        let h = unsafe { handle_ref(handle) };
+        let ptr = env.get_direct_buffer_address(&buffer).unwrap_or(std::ptr::null_mut());
+        if ptr.is_null() || max_samples <= 0 {
+            return 0;
+        }
+        let samples = unsafe {
+            std::slice::from_raw_parts_mut(ptr as *mut i16, max_samples as usize)
+        };
+        h.engine.read_audio(samples) as jint
+    }));
+    result.unwrap_or(0)
+}

|
||||
pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeDestroy(
|
||||
_env: JNIEnv,
|
||||
@@ -254,3 +332,30 @@ pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativeDestroy(
|
||||
drop(h);
|
||||
}));
|
||||
}
|
||||
|
||||
/// Ping a relay server — instance method, requires engine handle.
|
||||
/// Returns JSON `{"rtt_ms":N,"server_fingerprint":"hex"}` or null on failure.
|
||||
#[unsafe(no_mangle)]
|
||||
pub unsafe extern "system" fn Java_com_wzp_engine_WzpEngine_nativePingRelay<'a>(
|
||||
mut env: JNIEnv<'a>,
|
||||
_class: JClass,
|
||||
handle: jlong,
|
||||
relay_j: JString,
|
||||
) -> jstring {
|
||||
let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
|
||||
let h = unsafe { handle_ref(handle) };
|
||||
let relay: String = env.get_string(&relay_j).map(|s| s.into()).unwrap_or_default();
|
||||
match h.engine.ping_relay(&relay) {
|
||||
Ok(json) => Some(json),
|
||||
Err(_) => None,
|
||||
}
|
||||
}));
|
||||
|
||||
let json = match result {
|
||||
Ok(Some(s)) => s,
|
||||
_ => return JObject::null().into_raw(),
|
||||
};
|
||||
env.new_string(&json)
|
||||
.map(|s| s.into_raw())
|
||||
.unwrap_or(JObject::null().into_raw())
|
||||
}
|
||||
|
||||
@@ -51,6 +51,12 @@ pub struct CallStats {
    pub underruns: u64,
    /// Frames recovered by FEC.
    pub fec_recovered: u64,
+   /// Playout ring overflow count (reader was lapped by writer).
+   pub playout_overflows: u64,
+   /// Playout ring underrun count (reader found empty buffer).
+   pub playout_underruns: u64,
+   /// Capture ring overflow count.
+   pub capture_overflows: u64,
    /// Current mic audio level (RMS of i16 samples, 0-32767).
    pub audio_level: u32,
    /// Number of participants in the room (from last RoomUpdate).

@@ -38,6 +38,9 @@ pub async fn perform_handshake(
        ephemeral_pub,
        signature,
        supported_profiles: vec![
+           QualityProfile::STUDIO_64K,
+           QualityProfile::STUDIO_48K,
+           QualityProfile::STUDIO_32K,
            QualityProfile::GOOD,
            QualityProfile::DEGRADED,
            QualityProfile::CATASTROPHIC,

@@ -79,7 +79,7 @@ impl AudioDecoder for OpusDecoder {

    fn set_profile(&mut self, profile: QualityProfile) -> Result<(), CodecError> {
        match profile.codec {
-           CodecId::Opus24k | CodecId::Opus16k | CodecId::Opus6k => {
+           c if c.is_opus() => {
                self.codec_id = profile.codec;
                self.frame_duration_ms = profile.frame_duration_ms;
                Ok(())
@@ -100,7 +100,7 @@ impl AudioEncoder for OpusEncoder {

    fn set_profile(&mut self, profile: QualityProfile) -> Result<(), CodecError> {
        match profile.codec {
-           CodecId::Opus24k | CodecId::Opus16k | CodecId::Opus6k => {
+           c if c.is_opus() => {
                self.codec_id = profile.codec;
                self.frame_duration_ms = profile.frame_duration_ms;
                self.apply_bitrate(profile.codec)?;

@@ -18,6 +18,12 @@ pub enum CodecId {
    Codec2_1200 = 4,
    /// Comfort noise descriptor (silence suppression)
    ComfortNoise = 5,
+   /// Opus at 32kbps (studio low)
+   Opus32k = 6,
+   /// Opus at 48kbps (studio)
+   Opus48k = 7,
+   /// Opus at 64kbps (studio high)
+   Opus64k = 8,
}

impl CodecId {
@@ -27,6 +33,9 @@ impl CodecId {
        Self::Opus24k => 24_000,
        Self::Opus16k => 16_000,
        Self::Opus6k => 6_000,
+       Self::Opus32k => 32_000,
+       Self::Opus48k => 48_000,
+       Self::Opus64k => 64_000,
        Self::Codec2_3200 => 3_200,
        Self::Codec2_1200 => 1_200,
        Self::ComfortNoise => 0,
@@ -36,8 +45,7 @@ impl CodecId {
    /// Preferred frame duration in milliseconds.
    pub const fn frame_duration_ms(self) -> u8 {
        match self {
-           Self::Opus24k => 20,
-           Self::Opus16k => 20,
+           Self::Opus24k | Self::Opus16k | Self::Opus32k | Self::Opus48k | Self::Opus64k => 20,
            Self::Opus6k => 40,
            Self::Codec2_3200 => 20,
            Self::Codec2_1200 => 40,
@@ -48,7 +56,8 @@ impl CodecId {
    /// Sample rate expected by this codec.
    pub const fn sample_rate_hz(self) -> u32 {
        match self {
-           Self::Opus24k | Self::Opus16k | Self::Opus6k => 48_000,
+           Self::Opus24k | Self::Opus16k | Self::Opus6k
+           | Self::Opus32k | Self::Opus48k | Self::Opus64k => 48_000,
            Self::Codec2_3200 | Self::Codec2_1200 => 8_000,
            Self::ComfortNoise => 48_000,
        }
@@ -63,6 +72,9 @@ impl CodecId {
        3 => Some(Self::Codec2_3200),
        4 => Some(Self::Codec2_1200),
        5 => Some(Self::ComfortNoise),
+       6 => Some(Self::Opus32k),
+       7 => Some(Self::Opus48k),
+       8 => Some(Self::Opus64k),
        _ => None,
    }
}
@@ -71,6 +83,12 @@ impl CodecId {
    pub const fn to_wire(self) -> u8 {
        self as u8
    }

+   /// Returns true if this is an Opus variant.
+   pub const fn is_opus(self) -> bool {
+       matches!(self, Self::Opus6k | Self::Opus16k | Self::Opus24k
+           | Self::Opus32k | Self::Opus48k | Self::Opus64k)
+   }
}

/// Describes the complete quality configuration for a call session.

@@ -111,6 +129,30 @@ impl QualityProfile {
        frames_per_block: 8,
    };

+   /// Studio low: Opus 32kbps, minimal FEC.
+   pub const STUDIO_32K: Self = Self {
+       codec: CodecId::Opus32k,
+       fec_ratio: 0.1,
+       frame_duration_ms: 20,
+       frames_per_block: 5,
+   };
+
+   /// Studio: Opus 48kbps, minimal FEC.
+   pub const STUDIO_48K: Self = Self {
+       codec: CodecId::Opus48k,
+       fec_ratio: 0.1,
+       frame_duration_ms: 20,
+       frames_per_block: 5,
+   };
+
+   /// Studio high: Opus 64kbps, minimal FEC.
+   pub const STUDIO_64K: Self = Self {
+       codec: CodecId::Opus64k,
+       fec_ratio: 0.1,
+       frame_duration_ms: 20,
+       frames_per_block: 5,
+   };
+
    /// Estimated total bandwidth in kbps including FEC overhead.
    pub fn total_bitrate_kbps(&self) -> f32 {
        let base = self.codec.bitrate_bps() as f32 / 1000.0;

@@ -28,6 +28,7 @@ prometheus = "0.13"
axum = { version = "0.7", default-features = false, features = ["tokio", "http1", "ws"] }
tower-http = { version = "0.6", features = ["fs"] }
futures-util = "0.3"
+dirs = "6"

[[bin]]
name = "wzp-relay"

@@ -13,7 +13,7 @@ use std::sync::Arc;
use std::time::Duration;

use tokio::sync::Mutex;
-use tracing::{error, info};
+use tracing::{error, info, warn};

use wzp_proto::MediaTransport;
use wzp_relay::config::RelayConfig;
@@ -184,6 +184,21 @@ async fn run_downstream(
    }
}

+/// Detect a non-loopback IP address from local interfaces.
+/// Prefers public IPs over private (10.x, 172.16-31.x, 192.168.x).
+fn detect_public_ip() -> Option<String> {
+    use std::net::UdpSocket;
+    // Connect to a public address to find our outbound IP (doesn't actually send anything)
+    if let Ok(socket) = UdpSocket::bind("0.0.0.0:0") {
+        if socket.connect("8.8.8.8:80").is_ok() {
+            if let Ok(addr) = socket.local_addr() {
+                return Some(addr.ip().to_string());
+            }
+        }
+    }
+    None
+}
+
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let config = parse_args();
@@ -207,11 +222,51 @@ async fn main() -> anyhow::Result<()> {
        tokio::spawn(wzp_relay::metrics::serve_metrics(port, m, p, rr));
    }

-   // Generate ephemeral relay identity for crypto handshake
-   let relay_seed = wzp_crypto::Seed::generate();
+   // Load or generate relay identity — persisted in ~/.wzp/relay-identity
+   let relay_seed = {
+       let config_dir = dirs::home_dir()
+           .unwrap_or_else(|| std::path::PathBuf::from("."))
+           .join(".wzp");
+       let identity_path = config_dir.join("relay-identity");
+       if identity_path.exists() {
+           if let Ok(hex) = std::fs::read_to_string(&identity_path) {
+               if let Ok(s) = wzp_crypto::Seed::from_hex(hex.trim()) {
+                   info!("loaded relay identity from {}", identity_path.display());
+                   s
+               } else {
+                   warn!("corrupt relay identity file, generating new");
+                   let s = wzp_crypto::Seed::generate();
+                   let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
+                   let _ = std::fs::write(&identity_path, &hex);
+                   s
+               }
+           } else {
+               let s = wzp_crypto::Seed::generate();
+               let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
+               let _ = std::fs::write(&identity_path, &hex);
+               s
+           }
+       } else {
+           let s = wzp_crypto::Seed::generate();
+           let _ = std::fs::create_dir_all(&config_dir);
+           let hex: String = s.0.iter().map(|b| format!("{b:02x}")).collect();
+           let _ = std::fs::write(&identity_path, &hex);
+           info!("generated relay identity at {}", identity_path.display());
+           s
+       }
+   };
    let relay_fp = relay_seed.derive_identity().public_identity().fingerprint;
    info!(addr = %config.listen_addr, fingerprint = %relay_fp, "WarzonePhone relay starting");

+   // Print federation hint with our public IP + listen port
+   let listen_port = config.listen_addr.port();
+   let public_ip = detect_public_ip();
+   if let Some(ip) = &public_ip {
+       info!("federation: to peer with this relay, add to peers config:");
+       info!("  - url: \"{ip}:{listen_port}\"");
+       info!("    fingerprint: \"{relay_fp}\"");
+   }
+
    let (server_config, _cert) = wzp_transport::server_config();
    let endpoint = wzp_transport::create_endpoint(config.listen_addr, Some(server_config))?;

@@ -299,6 +354,13 @@ async fn main() -> anyhow::Result<()> {

    let transport = Arc::new(wzp_transport::QuinnTransport::new(connection));

+   // Ping connections: client just measures QUIC connect RTT.
+   // No handshake, no streams — client closes immediately after connecting.
+   if room_name == "ping" {
+       info!(%addr, "ping connection (RTT probe)");
+       return;
+   }
+
    // Probe connections use SNI "_probe" to identify themselves.
    // They skip auth + handshake and just do Ping->Pong + presence gossip.
    if room_name == "_probe" {

debug/INCIDENT-2026-04-06-art-gc-sigbus.md — new file, 115 lines
@@ -0,0 +1,115 @@

# Incident Report: SIGBUS in ART GC During Audio Thread JNI Calls

**Date:** 2026-04-06
**Severity:** High — app crash (SIGBUS) mid-call
**Status:** Root-caused, fix proposed
**Affects:** Android 16 (API 36) devices with concurrent mark-compact GC

## Summary

The app crashes with SIGBUS (signal 7, BUS_ADRERR) during an active call. The crash occurs in ART's garbage collector or JIT compiler, NOT in our Rust native code or AudioRing buffer. Both `wzp-capture` and `wzp-playout` Kotlin threads are affected.

## Crash Details

### Crash 1: wzp-capture (18:42, after 476s of call)

```
Fatal signal 7 (SIGBUS), code 2 (BUS_ADRERR), fault addr 0x720009be38
tid 19697 (wzp-capture), pid 17885 (com.wzp.phone)
```

**Backtrace:**
```
#00 art::StackVisitor::WalkStack
#01 art::Thread::VisitRoots
#02 art::gc::collector::MarkCompact::ThreadFlipVisitor::Run
#03 art::Thread::EnsureFlipFunctionStarted
#04 CheckJNI::ReleasePrimitiveArrayElements   ← JNI boundary
#05 android_media_AudioRecord_readInArray     ← AudioRecord.read()
#09 com.wzp.audio.AudioPipeline.runCapture
```

**Root cause:** ART's concurrent mark-compact GC (`MarkCompact::ThreadFlipVisitor`) is flipping thread roots while the capture thread is in the middle of a JNI call (`AudioRecord.read()`). The GC's `EnsureFlipFunctionStarted` triggers a stack walk that hits an invalid address.

### Crash 2: wzp-playout (19:17, mid-call)

```
Fatal signal 7 (SIGBUS), code 2 (BUS_ADRERR), fault addr 0x225eb98
tid 32574 (wzp-playout), pid 32479 (com.wzp.phone)
```

**Backtrace:**
```
#00 com.wzp.audio.AudioPipeline.runPlayout   ← JIT-compiled code
#01 art_quick_osr_stub                       ← On-Stack Replacement
#02 art::jit::Jit::MaybeDoOnStackReplacement
#03-#04 art::interpreter::ExecuteSwitchImplCpp
```

**Root cause:** ART's JIT compiler performed On-Stack Replacement (OSR) on the hot playout loop. The OSR stub references a code address (`0x225eb98`) that is no longer valid — likely because the GC moved the compiled code in memory during concurrent compaction.

## Why This Happens

Android 16 introduced a new **concurrent mark-compact GC** (CMC) that moves objects in memory while other threads are running. This is safe for normal Java code because ART uses read barriers. But our audio threads have specific properties that stress this:

1. **`Thread.MAX_PRIORITY`** — audio threads run at the highest priority, starving the GC thread of CPU time. The GC may not complete its thread-flip before the audio thread resumes.

2. **Tight JNI loops** — `runCapture()` and `runPlayout()` loop every 20ms calling `AudioRecord.read()` / `AudioTrack.write()` via JNI. Each JNI transition is a GC safepoint, but the thread spends most of its time in native code where the GC can't flip it.

3. **Long-running JIT-compiled code** — the hot loop gets JIT-compiled and may undergo OSR. If the GC compacts memory while OSR is in progress, the stub can reference stale addresses.

4. **Daemon threads that never exit** — our threads are parked with `Thread.sleep(Long.MAX_VALUE)` after the call ends (to avoid the libcrypto TLS destructor crash). These zombie threads accumulate GC root scan work.

## Evidence This Is Not Our Bug

| Component | Evidence |
|-----------|----------|
| **AudioRing** | Not in any backtrace. All crash frames are in `libart.so` (ART runtime) |
| **Rust native code** | `libwzp_android.so` not in any crash frame |
| **JNI bridge** | Crash happens during `ReleasePrimitiveArrayElements` (ART internal), not during our JNI calls |
| **Timing** | Crashes after 476s and mid-call — not during init or teardown |

## Proposed Fix

### Option A: Disable concurrent GC compaction for audio threads (recommended)

Use `dalvik.vm.gctype` or per-thread GC pinning to prevent the mark-compact collector from moving objects referenced by audio threads.

**Not directly controllable from app code.** But we can reduce GC pressure:

### Option B: Reduce JNI transitions in audio threads

Instead of calling `engine.writeAudio(pcm)` / `engine.readAudio(pcm)` via JNI on every 20ms frame, batch multiple frames or use `DirectByteBuffer` to share memory without JNI array copies.

**Implementation:**
- Allocate a `DirectByteBuffer` in Kotlin, share the pointer with Rust via JNI
- Audio threads write/read directly to the buffer (no JNI call per frame)
- Rust reads/writes from the same memory region
- Reduces JNI transitions from 100/sec to 0/sec per audio direction

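The buffer side of that setup can be sketched in a few lines. This is a minimal, hypothetical fragment (class and method names are illustrative, not the project's actual code); the frame size of 960 samples assumes the 20 ms / 48 kHz mono framing the engine logs, and the native consumer would be a JNI entry point like the `nativeWriteAudioDirect` function added in the diff above:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical capture-side setup: one direct, little-endian buffer is
// allocated up front and reused for every frame, so no Java array is
// copied across the JNI boundary per 20 ms frame.
public class FrameBuffer {
    // 20 ms at 48 kHz mono (assumed from the engine's logged framing)
    static final int FRAME_SAMPLES = 960;

    static ByteBuffer allocateFrameBuffer() {
        return ByteBuffer.allocateDirect(FRAME_SAMPLES * 2) // 2 bytes per i16 sample
                .order(ByteOrder.LITTLE_ENDIAN);            // Rust reads raw LE i16
    }

    public static void main(String[] args) {
        ByteBuffer buf = allocateFrameBuffer();
        // Capture thread would fill buf from AudioRecord, then hand the same
        // buffer to native code (e.g. via a nativeWriteAudioDirect-style call).
        buf.asShortBuffer().put(0, (short) 1234);    // write one sample
        System.out.println(buf.isDirect());          // direct → address visible to native code
        System.out.println(buf.getShort(0));         // LE layout matches Rust's i16
    }
}
```

Because the buffer is direct, `GetDirectBufferAddress` on the native side returns a stable pointer, so the GC never needs to pin or copy an array for these reads and writes.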
### Option C: Use Android's Oboe (AAudio) natively from Rust

Skip the Kotlin AudioRecord/AudioTrack entirely. Use Oboe (which we already have as a dependency in `wzp-android/Cargo.toml`) to create native audio streams directly from Rust. The audio callbacks run in native code with no JNI, no GC interaction, no ART.

This is how the project was originally designed (see `audio_android.rs` with Oboe references) before switching to Kotlin AudioRecord for simplicity.

**Pros:** Eliminates the entire JNI audio path. No GC interaction. Lower latency.
**Cons:** Requires rewriting `AudioPipeline.kt` into Rust. Oboe setup is more complex.

### Option D: Pin audio thread objects to prevent GC movement

Use JNI `GetPrimitiveArrayCritical` instead of `GetShortArrayRegion` to pin the array in memory during the operation. This prevents the GC from moving the array while we're using it.

**Implementation:** Change `nativeWriteAudio` / `nativeReadAudio` JNI functions to use critical sections.

### Recommendation

**Short term: Option B** (DirectByteBuffer) — reduces JNI transitions without major refactoring.

**Long term: Option C** (Oboe from Rust) — eliminates the problem entirely. This is the architecturally correct solution and matches the original design intent.

## Data Files

- Logcat from Nothing A059 (Android 16, API 36)
- Two crashes in the same session: 18:42 (capture, after 476s) and 19:17 (playout)
- Both SIGBUS/BUS_ADRERR, both in ART internal frames
debug/INCIDENT-2026-04-06-capture-thread-use-after-free.md — new file, 175 lines
@@ -0,0 +1,175 @@

# Incident Report: Native Crash in Capture Thread — Use-After-Free on Engine Handle

**Date:** 2026-04-06
**Severity:** Critical — app crash (SIGSEGV) on call hangup
**Status:** Root-caused, fix pending
**Affects:** Android client only

## Summary

The app crashes with a native SIGSEGV during or shortly after call hangup. The crash occurs in JIT-compiled code inside `AudioPipeline.runCapture()`. The root cause is a use-after-free: the capture thread calls `engine.writeAudio()` via JNI after the engine's native handle has been freed by `teardown()` on the ViewModel thread.

## Crash Stacktrace

```
04-06 13:05:42.707 F DEBUG: #09 pc 000000000250696c /memfd:jit-cache (deleted) (com.wzp.audio.AudioPipeline.runCapture+3228)
04-06 13:05:42.707 F DEBUG: #14 pc 0000000000005270 <anonymous:730900d000> (com.wzp.audio.AudioPipeline.start$lambda$0+0)
04-06 13:05:42.708 F DEBUG: #19 pc 00000000000044cc <anonymous:730900d000> (com.wzp.audio.AudioPipeline.$r8$lambda$0rYcivupwvyN4SgBXhsroKmTlo8+0)
04-06 13:05:42.708 F DEBUG: #24 pc 00000000000042e4 <anonymous:730900d000> (com.wzp.audio.AudioPipeline$$ExternalSyntheticLambda0.run+0)
```

This is a tombstone (signal crash), not a Java exception. The `F DEBUG` tag indicates a native crash handler (debuggerd) captured the signal.

## Root Cause

### The Race Condition

Two threads operate on the engine concurrently without synchronization:

**Thread 1: `wzp-capture` (AudioRecord thread, MAX_PRIORITY)**
```kotlin
// AudioPipeline.runCapture() — runs in a tight loop
while (running) {
    val read = recorder.read(pcm, 0, FRAME_SAMPLES)
    if (read > 0) {
        engine.writeAudio(pcm) // <-- JNI call to native engine
    }
}
```

**Thread 2: ViewModel/UI thread (normal priority)**
```kotlin
// CallViewModel.teardown()
stopAudio()          // sets AudioPipeline.running = false
engine?.stopCall()   // tells Rust to stop
engine?.destroy()    // frees native memory, sets nativeHandle = 0L
engine = null
```

### The Kotlin Guard is Insufficient

`WzpEngine.writeAudio()` has a guard:
```kotlin
fun writeAudio(pcm: ShortArray): Int {
    if (nativeHandle == 0L) return 0           // check
    return nativeWriteAudio(nativeHandle, pcm) // use
}
```

This is a **TOCTOU (time-of-check/time-of-use) race**:
1. Capture thread checks `nativeHandle != 0L` → true
2. ViewModel thread calls `destroy()`, which calls `nativeDestroy(handle)` then sets `nativeHandle = 0L`
3. Capture thread calls `nativeWriteAudio(handle, pcm)` with the now-freed handle
4. The JNI function dereferences `handle` as a pointer → **SIGSEGV**

The same race exists for `readAudio()` on the `wzp-playout` thread.

### Why `stopAudio()` Doesn't Prevent This
|
||||
|
||||
`AudioPipeline.stop()` sets `running = false` but does **NOT join or wait** for the threads:
|
||||
```kotlin
|
||||
fun stop() {
|
||||
running = false
|
||||
// Don't join — threads are parked as daemons to avoid native TLS crash
|
||||
captureThread = null
|
||||
playoutThread = null
|
||||
}
|
||||
```
|
||||
|
||||
The threads are intentionally not joined because of a separate bug: exiting a JNI-calling thread triggers a `SIGSEGV in OPENSSL_free` due to libcrypto TLS destructors on Android. The threads instead "park" with `Thread.sleep(Long.MAX_VALUE)` after the loop exits.
|
||||
|
||||
But the problem is the **window between `running = false` and the thread actually checking it**. The capture thread may be blocked in `recorder.read()` (which blocks for 20ms per frame) or in the middle of `engine.writeAudio()` when `destroy()` is called.

### Timeline of the Crash

```
T=0ms     ViewModel: stopAudio() → sets running=false
T=0ms     ViewModel: stopStatsPolling()
T=0ms     ViewModel: engine.stopCall() — Rust stops internal tasks
T=1ms     ViewModel: engine.destroy() — frees native memory
                     ↑ nativeHandle = 0L

T=0-20ms  Capture thread: still in recorder.read() or writeAudio()
          → if in writeAudio(), the nativeHandle check passed BEFORE destroy()
          → JNI dereferences freed pointer → SIGSEGV
```

## Affected Code

### Files with the race

| File | Line(s) | Issue |
|------|---------|-------|
| `android/.../WzpEngine.kt` | 107-108, 116-117 | TOCTOU on `nativeHandle` in `writeAudio()` / `readAudio()` |
| `android/.../CallViewModel.kt` | 257-262 | `stopAudio()` + `destroy()` without waiting for audio threads to quiesce |
| `android/.../AudioPipeline.kt` | 80-82 | `stop()` doesn't synchronize with running threads |

### Files with the thread parking workaround

| File | Line(s) | Context |
|------|---------|---------|
| `android/.../AudioPipeline.kt` | 57-58, 69-70 | Threads parked after loop exit to avoid libcrypto TLS crash |
| `android/.../AudioPipeline.kt` | 96-101 | `parkThread()` — `Thread.sleep(Long.MAX_VALUE)` |

## Constraints for the Fix

1. **Cannot join audio threads** — joining triggers a separate SIGSEGV in `OPENSSL_free` when the thread's TLS destructors fire (documented in `AudioPipeline.kt` comments). The parking workaround must be preserved.

2. **Must guarantee no JNI calls after `destroy()`** — the native handle is a raw pointer; any dereference after free is undefined behavior.

3. **Must not add blocking waits on the UI thread** — `teardown()` runs on the ViewModel thread, which must remain responsive.

4. **The `@Volatile running` flag is necessary but not sufficient** — it prevents new loop iterations but doesn't help with in-flight JNI calls.

5. **Both `writeAudio` and `readAudio` have the same race** — the fix must cover both the capture and playout paths.

## Reproduction

The crash is timing-dependent:

- It fires when the capture thread is in the middle of a `writeAudio()` JNI call at the moment `destroy()` is called
- It is more likely on slower devices or under CPU pressure (GC, thermal throttling)
- The race occurs on every hangup but crashes only ~10-30% of the time, because the vulnerable timing window is narrow

## Analysis of Possible Fix Approaches

### Approach A: Add a synchronization gate in the JNI bridge

Use a `ReentrantReadWriteLock` or `AtomicBoolean` in `WzpEngine.kt`:

- Audio threads acquire a read lock / check the flag before JNI calls
- `destroy()` acquires a write lock / sets the flag and waits for in-flight calls to drain

**Pro:** Clean, solves the race directly.

**Con:** Adds a lock to the audio hot path (every 20ms). `ReentrantReadWriteLock` is not lock-free. However, the read-lock path is uncontended 99.99% of the time (the write lock is taken only during destroy), so contention is negligible.
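Approach A can be sketched in a few lines. This is an illustrative Kotlin sketch, not the real `WzpEngine`: `GatedEngine` and the stubbed `nativeWriteAudio` are hypothetical stand-ins for the JNI bridge.

```kotlin
import java.util.concurrent.locks.ReentrantReadWriteLock
import kotlin.concurrent.read
import kotlin.concurrent.write

// Sketch of Approach A. Names are illustrative, not the real WzpEngine API.
class GatedEngine {
    private val gate = ReentrantReadWriteLock()
    @Volatile private var nativeHandle: Long = 1L // nonzero = live

    // Stand-in for the real JNI call; returns the number of samples written.
    private fun nativeWriteAudio(handle: Long, pcm: ShortArray): Int = pcm.size

    fun writeAudio(pcm: ShortArray): Int = gate.read {
        // Check and use happen under the same read lock, so destroy()
        // cannot free the handle between them: it needs the write lock.
        if (nativeHandle == 0L) return@read 0
        nativeWriteAudio(nativeHandle, pcm)
    }

    fun destroy() = gate.write {
        // The write lock is granted only after all in-flight read sections
        // (i.e. in-flight JNI calls) have drained.
        if (nativeHandle != 0L) {
            // nativeDestroy(nativeHandle) would run here
            nativeHandle = 0L
        }
    }
}
```

With this gate, `destroy()` cannot begin until every in-flight `writeAudio()` has left its read section, and any call that starts afterwards sees `nativeHandle == 0L` and returns 0.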

### Approach B: Defer `destroy()` until audio threads have stopped

Instead of calling `destroy()` in `teardown()`, set a flag and have the audio threads call `destroy()` after they exit the loop (before parking).

**Pro:** No locks on the hot path.

**Con:** Complex lifecycle — which thread calls destroy? What if both threads race to destroy? Needs a `CountDownLatch` or similar.

### Approach C: Make the JNI handle atomically invalidated

Use `AtomicLong` for `nativeHandle`, with `compareAndExchange` in `destroy()` and a `getAndCheck` pattern in the audio calls.

**Pro:** Lock-free.

**Con:** Still has a TOCTOU window — the thread can load the handle, then it gets CAS'd to 0, then the thread uses the stale handle. Doesn't fully solve the race without combining it with a reference count or epoch scheme.

### Approach D: Introduce a destroy latch

Add a `CountDownLatch(2)` — one slot per audio thread (capture + playout) — that each audio thread counts down after exiting its loop, before parking. `teardown()` sets `running = false`, then `await`s the latch (with a timeout), then calls `destroy()`.

**Pro:** Guarantees no in-flight JNI calls at destroy time. No locks on the hot path.

**Con:** `teardown()` blocks for up to one frame duration (~20ms) waiting for the threads to exit their loops. Acceptable for a hangup path.

### Recommendation

**Approach D (destroy latch)** is the cleanest. The 20ms worst-case wait is imperceptible on the hangup path, and it provides a hard guarantee that no JNI calls are in flight when `destroy()` runs. Combined with the existing `running` volatile flag, the audio threads exit their loops within one frame and count down the latch.

If the latch times out (e.g., AudioRecord.read() is stuck), `destroy()` proceeds anyway — the `panic::catch_unwind` in the JNI bridge will catch the invalid access as a panic rather than a SIGSEGV (though this is best-effort; a true SIGSEGV from freed memory is not catchable).
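The latch handshake behind Approach D can be sketched as follows (illustrative Kotlin; `AudioShutdown` and its method names are assumptions, and the real code would span `AudioPipeline` and `CallViewModel`):

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

// Sketch of Approach D. One latch slot per audio thread (capture + playout).
class AudioShutdown {
    @Volatile var running = true
    private val quiesced = CountDownLatch(2)

    // Each audio thread calls this after its loop exits, before parking.
    fun threadQuiesced() = quiesced.countDown()

    // teardown(): stop the loops, wait for both threads to leave their last
    // JNI call, then it is safe to free the native engine.
    fun teardown(destroy: () -> Unit): Boolean {
        running = false
        // Worst case is about one 20ms frame per thread; 250ms is generous.
        val clean = quiesced.await(250, TimeUnit.MILLISECONDS)
        destroy() // proceeds even on timeout (best-effort, as described above)
        return clean
    }
}
```

If `await` returns `false`, the engine is destroyed anyway, matching the best-effort timeout behavior recommended above.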

## Data Files

The crash was captured from the Nothing A059 device at 13:05:42 on 2026-04-06. The tombstone is in the device's `/data/tombstones/` directory. The logcat output shows the crash frames.

201
docs/PRD-adaptive-quality.md
Normal file
@@ -0,0 +1,201 @@

# PRD: Adaptive Quality Control (Auto Codec)

## Problem

When a user selects "Auto" quality, the system currently just starts at Opus 24k (GOOD) and never changes. There is no runtime adaptation — if the network degrades mid-call, audio breaks up instead of gracefully stepping down to a lower-bitrate codec. Conversely, if the network is excellent, the user stays on 24k when they could have studio-quality 64k.

The relay already sends `QualityReport` messages with loss % and RTT, and a `QualityAdapter` exists in `call.rs` that classifies network conditions into GOOD/DEGRADED/CATASTROPHIC — but none of this is wired into the Android or desktop engines.

## Solution

Wire the existing `QualityAdapter` into both engines so that "Auto" mode continuously monitors network quality and switches codecs mid-call. The full quality range should be used:

```
Excellent network    → Studio 64k   (best quality)
Good network         → Opus 24k     (default)
Degraded network     → Opus 6k      (lower bitrate, more FEC)
Poor network         → Codec2 3.2k  (vocoder, heavy FEC)
Catastrophic         → Codec2 1.2k  (minimum viable voice)
```

## Architecture

```
                 ┌─────────────────────┐
Relay ─────────► │ QualityReport       │  loss %, RTT, jitter
                 │ (every ~1s)         │
                 └────────┬────────────┘
                          │
                          ▼
                 ┌─────────────────────┐
                 │ QualityAdapter      │  classify + hysteresis
                 │ (3-report window)   │
                 └────────┬────────────┘
                          │ recommend new profile
                          ▼
           ┌──────────────┴──────────────┐
           │                             │
           ▼                             ▼
  ┌────────────────┐            ┌────────────────┐
  │ Encoder        │            │ Decoder        │
  │ set_profile()  │            │ (auto-switch   │
  │ + FEC update   │            │  already works)│
  └────────────────┘            └────────────────┘
```

## Existing Infrastructure

### What already exists (in `crates/wzp-client/src/call.rs`)

1. **`QualityAdapter`** (lines 97-196):
   - Sliding window of `QualityReport` messages
   - `classify()`: loss > 15% or RTT > 200ms → CATASTROPHIC, loss > 5% or RTT > 100ms → DEGRADED, else → GOOD
   - `should_switch()`: hysteresis — requires 3 consecutive reports recommending the same profile before switching
   - Prevents oscillation between profiles

2. **`QualityReport`** (in `wzp-proto/src/packet.rs`):
   - Sent by relay, piggy-backed on media packets
   - Fields: `loss_pct` (u8, 0-255 scaled), `rtt_4ms` (u8, RTT in 4ms units), `jitter_ms`, `bitrate_cap_kbps`

3. **`CallEncoder::set_profile()`** / **`CallDecoder` auto-switch**:
   - Encoder can switch codec mid-stream
   - Decoder already auto-detects the incoming codec from packet headers

### What's missing

1. **QualityReport ingestion** — neither the Android engine nor the desktop engine reads quality reports from the relay
2. **Profile switch loop** — no periodic check that feeds reports to `QualityAdapter` and applies recommended switches
3. **Upward adaptation** — `QualityAdapter` only classifies into 3 tiers (GOOD/DEGRADED/CATASTROPHIC). Needs extension to recommend studio tiers when conditions are excellent (loss < 1%, RTT < 50ms)
4. **Notification to UI** — when quality changes, the UI should show the current active codec

## Requirements

### Phase 1: Basic Adaptive (3-tier)

**Both Android and Desktop:**

1. **Ingest QualityReports**: In the recv loop, extract `quality_report` from incoming `MediaPacket`s when present. Feed it to `QualityAdapter`.

2. **Periodic quality check**: Every 1 second (or on each QualityReport), call `adapter.should_switch(&current_profile)`. If it returns `Some(new_profile)`:
   - Switch the encoder: `encoder.set_profile(new_profile)`
   - Update the FEC encoder: `fec_enc = create_encoder(&new_profile)`
   - Update the frame size if it changed (e.g., 20ms → 40ms)
   - Log the switch

3. **Frame size adaptation on switch**: When switching from 20ms to 40ms frames (or vice versa):
   - Android: update the `frame_samples` variable, resize `capture_buf`
   - Desktop: same — the send loop reads `frame_samples` dynamically

4. **UI indicator**: Show the current active codec in the call screen stats line.
   - Android: add to `CallStats` and display in the stats text
   - Desktop: add to the `get_status` response and display in the stats div

5. **Only in Auto mode**: Adaptive switching should only happen when the user selected "Auto". If they manually selected a profile, respect their choice.

### Phase 2: Extended Range (5-tier)

Extend `QualityAdapter::classify()` to use the full codec range:

| Condition | Profile | Codec |
|-----------|---------|-------|
| loss < 1% AND RTT < 30ms | STUDIO_64K | Opus 64k |
| loss < 1% AND RTT < 50ms | STUDIO_48K | Opus 48k |
| loss < 2% AND RTT < 80ms | STUDIO_32K | Opus 32k |
| loss < 5% AND RTT < 100ms | GOOD | Opus 24k |
| loss < 15% AND RTT < 200ms | DEGRADED | Opus 6k |
| loss >= 15% OR RTT >= 200ms | CATASTROPHIC | Codec2 1.2k |

With hysteresis:

- **Downgrade**: 3 consecutive reports (fast reaction to degradation)
- **Upgrade**: 5 consecutive reports (slow, cautious improvement)
- **Studio upgrade**: 10 consecutive reports (very conservative — avoid bouncing to 64k on brief good patches)
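The tier table and hysteresis rules above can be illustrated as pure functions. This Kotlin sketch is not the real Rust `QualityAdapter`; the names are assumptions, the rows are evaluated top-down, and applying the 10-report threshold to every `STUDIO_*` upgrade (not just 64k) is a design guess:

```kotlin
enum class Tier { STUDIO_64K, STUDIO_48K, STUDIO_32K, GOOD, DEGRADED, CATASTROPHIC }

// 5-tier classification from the table above; the first matching row wins.
fun classify(lossPct: Double, rttMs: Int): Tier = when {
    lossPct < 1.0 && rttMs < 30 -> Tier.STUDIO_64K
    lossPct < 1.0 && rttMs < 50 -> Tier.STUDIO_48K
    lossPct < 2.0 && rttMs < 80 -> Tier.STUDIO_32K
    lossPct < 5.0 && rttMs < 100 -> Tier.GOOD
    lossPct < 15.0 && rttMs < 200 -> Tier.DEGRADED
    else -> Tier.CATASTROPHIC
}

// Hysteresis: switch only after N consecutive identical recommendations.
class Hysteresis(private var current: Tier = Tier.GOOD) {
    private var pending: Tier? = null
    private var streak = 0

    private fun threshold(target: Tier): Int = when {
        target.ordinal > current.ordinal -> 3            // downgrade: react fast
        target.ordinal <= Tier.STUDIO_32K.ordinal -> 10  // studio upgrade: very cautious
        else -> 5                                        // ordinary upgrade: cautious
    }

    // Returns the new tier when a switch happens, null otherwise.
    fun ingest(recommended: Tier): Tier? {
        if (recommended == current) { pending = null; streak = 0; return null }
        if (recommended == pending) streak++ else { pending = recommended; streak = 1 }
        if (streak < threshold(recommended)) return null
        current = recommended
        pending = null
        streak = 0
        return current
    }
}
```

Feeding one classification per QualityReport into `ingest()` gives exactly the "3 consecutive reports to downgrade" behavior described above.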

### Phase 3: Bandwidth Probing

Rather than relying solely on loss/RTT:

1. Start at GOOD
2. After 10 seconds of stable call, probe upward by switching to STUDIO_32K
3. If no quality degradation after 5 seconds, probe to STUDIO_48K
4. If degradation is detected, immediately fall back
5. This discovers the true available bandwidth rather than guessing from loss stats
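A minimal sketch of the probe ladder in steps 1-4 (hypothetical Kotlin; the tier names and timings come from the steps above, everything else is assumed):

```kotlin
// Sketch of the Phase 3 probe ladder. Times are in seconds; the tier strings
// stand in for the Phase 2 profile enum. All names here are illustrative.
data class ProbeState(val tier: String, val nextProbeAt: Int)

fun step(state: ProbeState, nowSec: Int, degraded: Boolean): ProbeState = when {
    // Any degradation: fall back immediately, re-probe after another 10s stable.
    degraded -> ProbeState("GOOD", nowSec + 10)
    // First upward probe after 10s of stable call.
    state.tier == "GOOD" && nowSec >= state.nextProbeAt ->
        ProbeState("STUDIO_32K", nowSec + 5)
    // Second probe after 5s without degradation.
    state.tier == "STUDIO_32K" && nowSec >= state.nextProbeAt ->
        ProbeState("STUDIO_48K", nowSec + 5)
    else -> state
}
```

Driving `step()` once per second from the quality-check loop is enough to realize the schedule; a real implementation would also cap the ladder at the user-visible maximum tier.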

## Implementation Plan

### Android (`crates/wzp-android/src/engine.rs`)

```rust
// In the recv loop, after decoding:
if let Some(ref qr) = pkt.quality_report {
    quality_adapter.ingest(qr);
}

// Periodic check (every 50 frames ≈ 1 second):
if auto_profile && frames_decoded % 50 == 0 {
    if let Some(new_profile) = quality_adapter.should_switch(&current_profile) {
        info!(from = ?current_profile.codec, to = ?new_profile.codec, "auto: switching quality");
        let _ = encoder_ref.lock().set_profile(new_profile);
        *fec_enc_ref.lock() = create_encoder(&new_profile);
        current_profile = new_profile;
        frame_samples = frame_samples_for(&new_profile);
        // Resize capture buffer if needed
    }
}
```

**Challenge**: The encoder lives in the send task while the quality reports arrive in the recv task, so shared state is needed (an `AtomicU8` for the profile index, or a channel).

**Recommended approach**: Use an `AtomicU8` that the recv task writes and the send task reads:

```rust
let pending_profile = Arc::new(AtomicU8::new(0xFF)); // 0xFF = no change

// Recv task: when the adapter recommends a switch
pending_profile.store(new_profile_index, Ordering::Release);

// Send task: check at frame boundary
let p = pending_profile.swap(0xFF, Ordering::Acquire);
if p != 0xFF { /* apply switch */ }
```

### Desktop (`desktop/src-tauri/src/engine.rs`)

Same pattern. The desktop engine already has separate send/recv tasks with shared atomics for mic_muted, etc. Add a `pending_profile: Arc<AtomicU8>` following the same pattern.

### Desktop CLI (`crates/wzp-client/src/call.rs`)

The `CallEncoder` already has `set_profile()`. The `CallDecoder` already auto-switches. Just need to:

1. Add `QualityAdapter` to `CallDecoder`
2. Feed quality reports in `ingest()`
3. Check `should_switch()` in `decode_next()`
4. Emit the recommendation via a callback or return value

## Testing

1. **Local test with tc/netem**: Use Linux traffic control to simulate loss/latency:

   ```bash
   # Simulate 10% loss, 150ms RTT
   tc qdisc add dev lo root netem loss 10% delay 75ms
   # Run 2 clients in auto mode, verify they switch to DEGRADED
   ```

2. **CLI test**: Run `wzp-client --profile auto` between two instances with simulated network conditions

3. **Relay quality reports**: Verify the relay actually sends QualityReport messages. If it doesn't yet, that needs to be implemented first (check relay code).

## Open Questions

1. **Does the relay currently send QualityReports?** If not, Phase 1 is blocked until the relay implements per-client loss/RTT tracking and report generation. The relay sees all packets and can compute loss % per sender.

2. **Codec2 3.2k placement**: Should auto mode use Codec2 3.2k between DEGRADED and CATASTROPHIC? It's 20ms frames (lower latency than Opus 6k's 40ms) but speech-only quality.

3. **Cross-client adaptation**: If client A is on GOOD and client B auto-adapts to CATASTROPHIC, client A still sends Opus 24k. Client B can decode it fine (auto-switch on recv). But should A also be told to lower quality to save B's bandwidth? This requires signaling between clients.

## Milestones

| Phase | Scope | Effort | Dependency |
|-------|-------|--------|------------|
| 0 | Verify relay sends QualityReports | 0.5 day | None |
| 1a | Wire QualityAdapter in Android engine | 1 day | Phase 0 |
| 1b | Wire QualityAdapter in desktop engine | 1 day | Phase 0 |
| 1c | UI indicator (current codec) | 0.5 day | Phase 1a/1b |
| 2 | Extended 5-tier classification | 0.5 day | Phase 1 |
| 3 | Bandwidth probing | 2 days | Phase 2 |

170
docs/PRD-relay-federation.md
Normal file
@@ -0,0 +1,170 @@

# PRD: Relay Federation (Multi-Relay Mesh)

## Problem

Currently all participants in a call must connect to the same relay. This creates:

- **Single point of failure** — if the relay goes down, the entire call drops
- **Geographic latency** — users far from the relay get high RTT
- **Capacity limits** — one relay handles all traffic

Users should be able to connect to their nearest/preferred relay and still talk to users on other relays, as long as the relays are federated.

## Prerequisite: Fix Relay Identity Persistence

### Bug: TLS certificate regenerates on every restart

**Root cause:** `wzp-transport/src/config.rs:17` calls `rcgen::generate_simple_self_signed()`, which creates a new keypair every time. The relay's Ed25519 identity seed IS persisted to `~/.wzp/relay-identity`, but the TLS certificate is not derived from it.

**Impact:** Clients see a different server fingerprint after every relay restart, triggering the "Server Key Changed" warning. This also breaks federation, since relays identify each other by certificate fingerprint.

**Fix:** Derive the TLS certificate from the persisted relay seed:

1. Add `server_config_from_seed(seed: &[u8; 32])` to `wzp-transport`
2. Use the seed to create a deterministic keypair (e.g., derive an ECDSA key via HKDF from the Ed25519 seed)
3. Generate a self-signed cert with that keypair — same seed = same cert = same fingerprint
4. The relay passes its loaded seed to `server_config_from_seed()` instead of `server_config()`

**Effort:** 0.5 day
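The determinism requirement in step 2 can be illustrated in isolation. This Kotlin sketch uses an HKDF-extract-style HMAC derivation; the real fix would do this in Rust inside `wzp-transport` and feed the derived bytes into the TLS key type, so everything here is an assumption except the "same seed in, same bytes out" property:

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Illustrates step 2: deterministically derive key material from the
// persisted 32-byte relay seed. Same seed -> same bytes -> same cert
// fingerprint across restarts. The label string is an arbitrary example.
fun deriveTlsKeyMaterial(seed: ByteArray, info: String = "wzp-relay-tls"): ByteArray {
    require(seed.size == 32) { "relay seed must be 32 bytes" }
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(seed, "HmacSHA256"))
    return mac.doFinal(info.toByteArray())
}
```

The key point is that no randomness enters the derivation, so two relay restarts (or two processes loading the same seed file) produce identical key material.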

## Federation Design

### Core Concept

Two or more relays form a **federation mesh**. Each relay is an independent SFU. When relays are configured to trust each other, they bridge rooms with matching names — participants on relay A in room "podcast" hear participants on relay B in room "podcast" as if everyone were on the same relay.

### Configuration

Each relay reads a YAML config file (e.g., `~/.wzp/relay.yaml` or `--config relay.yaml`):

```yaml
# Relay identity (auto-generated if missing)
listen: 0.0.0.0:4433

# Federation peers — other relays we trust and bridge rooms with
# Both sides must configure each other for federation to work
peers:
  - url: "193.180.213.68:4433"
    fingerprint: "a5d6:e3c6:5ae7:185c:4eb1:af89:daed:4a43"
    label: "Pangolin EU"

  - url: "10.0.0.5:4433"
    fingerprint: "7f2a:b391:0c44:..."
    label: "Office LAN"
```

**Key rules:**

- Both relays must configure each other — **mutual trust** required
- A relay that receives a connection from an unknown peer logs: `"Relay a5d6:e3c6:... (193.180.213.68) wants to federate. To accept, add to peers config: url: 193.180.213.68:4433, fingerprint: a5d6:e3c6:..."`
- Fingerprints are verified via the TLS certificate (requires the identity fix above)
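For illustration, a fingerprint in the colon-grouped form shown above could be rendered like this (Kotlin sketch; the choice of SHA-256 over the certificate DER, truncated to 8 groups of 4 hex digits, is an assumption about the real format):

```kotlin
import java.security.MessageDigest

// Render a certificate fingerprint in the colon-grouped hex form used in the
// peers config, e.g. "a5d6:e3c6:...". Hash choice and truncation are assumed.
fun fingerprint(certDer: ByteArray, groups: Int = 8): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(certDer)
    val hex = digest.joinToString("") { "%02x".format(it) }
    // Each group is 4 hex digits (2 digest bytes); 8 groups use 16 bytes.
    return (0 until groups).joinToString(":") { hex.substring(it * 4, it * 4 + 4) }
}
```

Whatever the exact hash, both relays must compute it the same way over the same certificate bytes, or the mutual-trust check above can never match.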

### Protocol

#### Peer Connection

1. On startup, each relay attempts QUIC connections to all configured peers
2. The connection uses SNI `"_federation"` (reserved room name prefix) to distinguish from client connections
3. After the QUIC handshake, verify the peer's certificate fingerprint matches the configured fingerprint
4. If fingerprint mismatch → reject, log warning
5. If a peer connects but isn't in our config → log the helpful "add to config" message, reject

#### Room Bridging

Once two relays are connected:

1. **Room discovery**: When a local participant joins room "T", the relay sends a `FederationRoomJoin { room: "T" }` signal to all connected peers
2. **Room leave**: When the last local participant leaves room "T", send `FederationRoomLeave { room: "T" }`
3. **Media forwarding**: For each room that exists on both relays:
   - Relay A forwards all media packets from its local participants to relay B
   - Relay B forwards all media packets from its local participants to relay A
   - Each relay then fans out received federated media to its local participants (same as local SFU forwarding)
4. **Participant presence**: `RoomUpdate` signals are merged — local participants + federated participants from all peers

```
Relay A (2 local users)              Relay B (1 local user)
┌─────────────────────┐            ┌─────────────────────┐
│ Room "T"            │            │ Room "T"            │
│  Alice (local) ─────┼──media──►  │  Charlie (local)    │
│  Bob (local) ───────┼──media──►  │                     │
│                     │ ◄──media── ┼── Charlie           │
│  Charlie (federated)│            │  Alice (federated)  │
│                     │            │  Bob (federated)    │
└─────────────────────┘            └─────────────────────┘
```

#### Signal Messages (new)

```rust
enum FederationSignal {
    /// A room exists on this relay with active participants
    RoomJoin { room: String, participants: Vec<ParticipantInfo> },
    /// Room is empty on this relay
    RoomLeave { room: String },
    /// Participant update for a federated room
    ParticipantUpdate { room: String, participants: Vec<ParticipantInfo> },
}
```

#### Media Forwarding

Federated media is forwarded as raw QUIC datagrams — the relay doesn't decode/re-encode. Each packet is prefixed with a room identifier so the receiving relay knows which room to fan it out to:

```
[room_hash: 8 bytes][original_media_packet]
```

The 8-byte room hash is computed once when the federation room bridge is established.
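The framing can be sketched directly (Kotlin for illustration; the real relay code is Rust, and the big-endian 8-byte prefix is an assumption consistent with the layout above):

```kotlin
import java.nio.ByteBuffer

// Prefix a media packet with an 8-byte room hash, and split it back out on
// the receiving relay. The hash value is opaque here; it would be derived
// once per bridge (e.g. from the room name) and cached.
fun frame(roomHash: Long, packet: ByteArray): ByteArray =
    ByteBuffer.allocate(8 + packet.size).putLong(roomHash).put(packet).array()

fun unframe(datagram: ByteArray): Pair<Long, ByteArray> {
    val buf = ByteBuffer.wrap(datagram)
    val hash = buf.getLong()
    val packet = ByteArray(buf.remaining()).also { buf.get(it) }
    return hash to packet
}
```

Because the media payload stays opaque, this framing adds a fixed 8 bytes per datagram and no per-packet crypto or codec work on the federation path.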

### What Relays DON'T Do

- **No transcoding** — media passes through as-is. If Alice sends Opus 64k, Charlie receives Opus 64k
- **No re-encryption** — packets are already encrypted end-to-end between participants. Relays just forward opaque bytes
- **No central coordinator** — each relay independently connects to its configured peers. No master/slave, no consensus protocol
- **No automatic peer discovery** — peers must be explicitly configured in YAML

### Failure Handling

- If a peer relay goes down, the federation link drops. Local rooms continue to work. Federated participants disappear from presence.
- Reconnection: retry starting at 30 seconds, with exponential backoff capped at 5 minutes
- If a peer relay restarts with a new identity (bug not fixed), the fingerprint check fails and federation is rejected with a clear error log

## Implementation Plan

### Phase 0: Fix Relay Identity (prerequisite)

- Derive TLS cert from persisted seed
- Same seed → same cert → same fingerprint across restarts

### Phase 1: YAML Config + Peer Connection

- Add `--config relay.yaml` CLI flag
- Parse peers config
- On startup, connect to all configured peers via QUIC
- Verify certificate fingerprints
- Log helpful message for unconfigured peers
- Reconnect on disconnect

### Phase 2: Room Bridging

- Track which rooms exist on each peer
- Forward media for shared rooms
- Merge participant presence across peers
- Handle room join/leave signals

### Phase 3: Resilience

- Graceful handling of peer disconnect/reconnect
- Don't duplicate packets if a participant is reachable via multiple paths
- Rate limiting on federation links (prevent amplification)
- Metrics: federated rooms, packets forwarded, peer latency

## Effort Estimates

| Phase | Scope | Effort |
|-------|-------|--------|
| 0 | Fix relay TLS identity from seed | 0.5 day |
| 1 | YAML config + peer QUIC connections | 2 days |
| 2 | Room bridging + media forwarding + presence merge | 3-4 days |
| 3 | Resilience + metrics | 2 days |

## Non-Goals (v1)

- Automatic peer discovery (mDNS, DHT, etc.)
- Cascading federation (relay A ↔ B ↔ C where A doesn't know C)
- Load balancing across relays
- Encryption between relays (QUIC provides transport encryption; e2e encryption between participants is orthogonal)
- Different rooms on different relays (all federated rooms are bridged by name)

394
docs/android/fix-audio-ring-desync.md
Normal file
@@ -0,0 +1,394 @@

# Fix: AudioRing SPSC Buffer Cursor Desync

## Problem

A critical bug causes 10-16 seconds of bidirectional audio silence mid-call (~25-30s in). Both participants go silent at the exact same moment. The QUIC transport, relay, Opus codec, and FEC are all healthy — the bug is in the lock-free ring buffer that transfers decoded PCM from the Rust recv task to the Kotlin AudioTrack playout thread.

**Root cause:** `AudioRing::write()` modifies `read_pos` from the producer thread during overflow handling (lines 68-72 of `audio_ring.rs`). This violates the SPSC invariant — only the consumer should own `read_pos`. When both threads write to `read_pos`, a race corrupts the cursor state, causing the reader to see an empty or stale buffer for 12-16 seconds.

**Full forensics:** `debug/INCIDENT-2026-04-06-playout-ring-desync.md`

---

## Solution: Reader-Detects-Lap Architecture

The writer NEVER touches `read_pos`. On overflow, the writer simply overwrites old buffer data and advances `write_pos`. The reader detects it was lapped and self-corrects by snapping its own `read_pos` forward.

---

## Implementation Steps

### Step 1: Rewrite `AudioRing`

**File:** `crates/wzp-android/src/audio_ring.rs`

Replace the entire implementation with:

**Constants:**

```rust
/// Ring buffer capacity — must be a power of 2 for bitmask indexing.
/// 16384 samples = 341.3ms at 48kHz mono. Provides 70% more headroom
/// than the previous 9600 (200ms) for surviving Android GC pauses.
const RING_CAPACITY: usize = 16384; // 2^14
const RING_MASK: usize = RING_CAPACITY - 1;
```

**Struct:**

```rust
pub struct AudioRing {
    buf: Box<[i16; RING_CAPACITY]>,
    write_pos: AtomicUsize,    // monotonically increasing, ONLY written by producer
    read_pos: AtomicUsize,     // monotonically increasing, ONLY written by consumer
    overflow_count: AtomicU64, // incremented by reader when it detects a lap
    underrun_count: AtomicU64, // incremented by reader when ring is empty
}
```

**`write()` — producer. Does NOT touch `read_pos`:**

```rust
pub fn write(&self, samples: &[i16]) -> usize {
    let count = samples.len().min(RING_CAPACITY);
    let w = self.write_pos.load(Ordering::Relaxed);

    for i in 0..count {
        unsafe {
            let ptr = self.buf.as_ptr() as *mut i16;
            *ptr.add((w + i) & RING_MASK) = samples[i];
        }
    }

    self.write_pos.store(w.wrapping_add(count), Ordering::Release);
    count
}
```

**`read()` — consumer. Detects lap, self-corrects:**

```rust
pub fn read(&self, out: &mut [i16]) -> usize {
    let w = self.write_pos.load(Ordering::Acquire);
    let mut r = self.read_pos.load(Ordering::Relaxed);

    let mut avail = w.wrapping_sub(r);

    // Lap detection: writer has overwritten our unread data.
    // Snap read_pos forward to oldest valid data in the buffer.
    // Safe because we (the reader) are the sole owner of read_pos.
    if avail > RING_CAPACITY {
        r = w.wrapping_sub(RING_CAPACITY);
        avail = RING_CAPACITY;
        self.overflow_count.fetch_add(1, Ordering::Relaxed);
    }

    let count = out.len().min(avail);
    if count == 0 {
        if w == r {
            self.underrun_count.fetch_add(1, Ordering::Relaxed);
        }
        return 0;
    }

    for i in 0..count {
        out[i] = unsafe { *self.buf.as_ptr().add((r + i) & RING_MASK) };
    }

    self.read_pos.store(r.wrapping_add(count), Ordering::Release);
    count
}
```

**`available()` — clamped for external callers:**

```rust
pub fn available(&self) -> usize {
    let w = self.write_pos.load(Ordering::Acquire);
    let r = self.read_pos.load(Ordering::Relaxed);
    w.wrapping_sub(r).min(RING_CAPACITY)
}
```

**`free_space()` — keep for API compat:**

```rust
pub fn free_space(&self) -> usize {
    RING_CAPACITY.saturating_sub(self.available())
}
```

**Diagnostic accessors:**

```rust
pub fn overflow_count(&self) -> u64 {
    self.overflow_count.load(Ordering::Relaxed)
}

pub fn underrun_count(&self) -> u64 {
    self.underrun_count.load(Ordering::Relaxed)
}
```

**Constructor:**

```rust
pub fn new() -> Self {
    debug_assert!(RING_CAPACITY.is_power_of_two());
    Self {
        buf: Box::new([0i16; RING_CAPACITY]),
        write_pos: AtomicUsize::new(0),
        read_pos: AtomicUsize::new(0),
        overflow_count: AtomicU64::new(0),
        underrun_count: AtomicU64::new(0),
    }
}
```

**Imports to add:** `use std::sync::atomic::AtomicU64;`

**Safety comment update:**

```rust
// SAFETY: AudioRing is SPSC — one thread writes (producer), one reads (consumer).
// The producer only writes write_pos. The consumer only writes read_pos.
// Neither thread writes the other's cursor. Buffer indices are derived from
// the owning thread's cursor, ensuring no concurrent access to the same index.
```

---
### Step 2: Add counter fields to `CallStats`

**File:** `crates/wzp-android/src/stats.rs`

Add three fields to the `CallStats` struct (after `fec_recovered`):

```rust
/// Playout ring overflow count (reader was lapped by writer).
pub playout_overflows: u64,
/// Playout ring underrun count (reader found empty buffer).
pub playout_underruns: u64,
/// Capture ring overflow count.
pub capture_overflows: u64,
```

These derive `Default` (= 0) automatically via the existing `#[derive(Default)]`.

---

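As a sanity check of the derive claim, a standalone sketch (the struct name is a stand-in, not the real `CallStats`):

```rust
// Minimal sketch: u64 fields added to a #[derive(Default)] struct start at 0,
// so the new counters need no constructor or initializer changes.
// `CallStatsSketch` is hypothetical, standing in for the real CallStats.
#[derive(Default, Debug)]
struct CallStatsSketch {
    playout_overflows: u64,
    playout_underruns: u64,
    capture_overflows: u64,
}

fn main() {
    let s = CallStatsSketch::default();
    assert_eq!(s.playout_overflows, 0);
    assert_eq!(s.playout_underruns, 0);
    assert_eq!(s.capture_overflows, 0);
    println!("ok");
}
```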
### Step 3: Wire ring diagnostics into engine stats + logging

**File:** `crates/wzp-android/src/engine.rs`

**3a.** In `get_stats()` (~line 181), populate the new fields:

```rust
stats.playout_overflows = self.state.playout_ring.overflow_count();
stats.playout_underruns = self.state.playout_ring.underrun_count();
stats.capture_overflows = self.state.capture_ring.overflow_count();
```

**3b.** In the recv task periodic stats log, add ring health:

```rust
info!(
    frames_decoded,
    fec_recovered,
    recv_errors,
    max_recv_gap_ms,
    playout_avail = state.playout_ring.available(),
    playout_overflows = state.playout_ring.overflow_count(),
    playout_underruns = state.playout_ring.underrun_count(),
    "recv stats"
);
```

**3c.** In the send task periodic stats log, add capture ring health:

```rust
info!(
    seq = s,
    block_id,
    frames_sent,
    frames_dropped,
    send_errors,
    ring_avail = state.capture_ring.available(),
    capture_overflows = state.capture_ring.overflow_count(),
    "send stats"
);
```

---

### Step 4: Parse new stats in Kotlin

**File:** `android/app/src/main/java/com/wzp/engine/CallStats.kt`

Add fields to the data class:

```kotlin
val playoutOverflows: Long = 0,
val playoutUnderruns: Long = 0,
val captureOverflows: Long = 0,
```

Add parsing in `fromJson()`:

```kotlin
playoutOverflows = obj.optLong("playout_overflows", 0),
playoutUnderruns = obj.optLong("playout_underruns", 0),
captureOverflows = obj.optLong("capture_overflows", 0),
```

No UI changes needed — these fields will appear in debug report JSON automatically.

---

### Step 5: Unit tests

**File:** `crates/wzp-android/src/audio_ring.rs` — add `#[cfg(test)] mod tests`

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn capacity_is_power_of_two() {
        assert!(RING_CAPACITY.is_power_of_two());
    }

    #[test]
    fn basic_write_read() {
        let ring = AudioRing::new();
        let input: Vec<i16> = (0..960).map(|i| i as i16).collect();
        ring.write(&input);
        assert_eq!(ring.available(), 960);

        let mut output = vec![0i16; 960];
        let read = ring.read(&mut output);
        assert_eq!(read, 960);
        assert_eq!(output, input);
        assert_eq!(ring.available(), 0);
    }

    #[test]
    fn wraparound() {
        let ring = AudioRing::new();
        let frame = vec![42i16; 960];
        // Write enough to wrap the buffer multiple times
        for _ in 0..20 {
            ring.write(&frame);
            let mut out = vec![0i16; 960];
            ring.read(&mut out);
            assert!(out.iter().all(|&s| s == 42));
        }
    }

    #[test]
    fn overflow_detected_by_reader() {
        let ring = AudioRing::new();
        // Write more than RING_CAPACITY without reading
        let big = vec![7i16; RING_CAPACITY + 960];
        ring.write(&big[..RING_CAPACITY]);
        ring.write(&big[RING_CAPACITY..]);

        // Reader should detect the lap
        let mut out = vec![0i16; 960];
        let read = ring.read(&mut out);
        assert!(read > 0);
        assert_eq!(ring.overflow_count(), 1);
        // Data should be from the most recent writes
        assert!(out.iter().all(|&s| s == 7));
    }

    #[test]
    fn writer_never_modifies_read_pos() {
        let ring = AudioRing::new();
        // read_pos should stay at 0 until read() is called
        let data = vec![1i16; RING_CAPACITY + 960];
        ring.write(&data);
        // The tests module lives in the same file, so the raw cursors are
        // accessible here: an unclamped cursor distance greater than
        // RING_CAPACITY proves write() never advanced read_pos.
        let w = ring.write_pos.load(std::sync::atomic::Ordering::Relaxed);
        let r = ring.read_pos.load(std::sync::atomic::Ordering::Relaxed);
        assert_eq!(r, 0, "write() must not modify read_pos");
        assert!(w.wrapping_sub(r) > RING_CAPACITY);
    }

    #[test]
    fn underrun_counted() {
        let ring = AudioRing::new();
        let mut out = vec![0i16; 960];
        let read = ring.read(&mut out);
        assert_eq!(read, 0);
        assert_eq!(ring.underrun_count(), 1);
    }

    #[test]
    fn overflow_recovery_reads_recent_data() {
        let ring = AudioRing::new();
        // Fill with old data
        let old = vec![1i16; RING_CAPACITY];
        ring.write(&old);
        // Overwrite with new data (lapping the reader)
        let new_data = vec![99i16; 960];
        ring.write(&new_data);

        // Reader should snap forward and get recent data
        let mut out = vec![0i16; RING_CAPACITY];
        let read = ring.read(&mut out);
        assert_eq!(read, RING_CAPACITY);
        // The last 960 samples should be 99
        assert!(out[RING_CAPACITY - 960..].iter().all(|&s| s == 99));
        assert_eq!(ring.overflow_count(), 1);
    }
}
```

---

## Memory Ordering Reference

| Operation | Ordering | Rationale |
|-----------|----------|-----------|
| `write_pos.store` in `write()` | Release | Buffer writes visible before cursor advances |
| `write_pos.load` in `read()` | Acquire | Pairs with Release above — sees all buffer writes |
| `write_pos.load` in `write()` | Relaxed | Writer is sole owner of write_pos |
| `read_pos.load` in `read()` | Relaxed | Reader is sole owner of read_pos |
| `read_pos.store` in `read()` | Release | Makes available() consistent from any thread |
| `read_pos.load` in `available()` | Relaxed | Informational only, slight staleness OK |
| All counters | Relaxed | Diagnostic only |

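The Release/Acquire pair in the first two rows is the load-bearing part of this table. A toy standalone sketch (not the AudioRing code) showing why that pairing guarantees the consumer observes the buffer write once it sees the advanced cursor:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// DATA plays the role of a buffer slot; CURSOR plays the role of write_pos.
static DATA: AtomicUsize = AtomicUsize::new(0);
static CURSOR: AtomicUsize = AtomicUsize::new(0);

fn publish_and_consume() -> usize {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);  // "buffer write"
        CURSOR.store(1, Ordering::Release); // cursor advance publishes DATA
    });
    let consumer = thread::spawn(|| {
        // Acquire pairs with the Release store above: once the consumer sees
        // the advanced cursor, the earlier DATA write is guaranteed visible.
        while CURSOR.load(Ordering::Acquire) == 0 {
            std::hint::spin_loop();
        }
        DATA.load(Ordering::Relaxed)
    });
    producer.join().unwrap();
    consumer.join().unwrap()
}

fn main() {
    assert_eq!(publish_and_consume(), 42);
    println!("ok");
}
```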
---
## Capacity Tradeoff

| Capacity | Duration | Memory | Verdict |
|----------|----------|--------|---------|
| 8192 (2^13) | 170ms | 16KB | Less than current 200ms — risky |
| **16384 (2^14)** | **341ms** | **32KB** | **70% more headroom, bitmask indexing** |
| 32768 (2^15) | 682ms | 64KB | Excessive latency on overflow recovery |

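The "bitmask indexing" note in the chosen row relies on the power-of-two capacity: wrapping an ever-growing cursor becomes a single AND instead of a modulo. A standalone sketch (constants mirror the chosen size; this is not the real AudioRing code):

```rust
// With a power-of-two capacity, `cursor & (CAP - 1)` equals
// `cursor % CAP` for every unclamped cursor value.
const RING_CAPACITY: usize = 16384; // 2^14
const RING_MASK: usize = RING_CAPACITY - 1;

fn wrap(cursor: usize) -> usize {
    cursor & RING_MASK
}

fn main() {
    assert!(RING_CAPACITY.is_power_of_two());
    for c in [0usize, 1, 16383, 16384, 16385, 1_000_000, usize::MAX] {
        assert_eq!(wrap(c), c % RING_CAPACITY);
    }
    println!("ok");
}
```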
---
## Verification

1. `cargo test -p wzp-android` — new unit tests pass
2. `cargo ndk -t arm64-v8a build --release -p wzp-android` — ARM cross-compile succeeds
3. Build APK, install on both test devices (Nothing A059 + Pixel 6)
4. 2+ minute call — verify no audio gaps
5. Check debug report JSON: `playout_overflows` should be 0 or very small
6. Check logcat `wzp_android` tag: send/recv stats show healthy ring state
7. Stress test: play music through one device speaker while on call — forces high ring throughput

---

## Files to Modify

| File | What changes |
|------|-------------|
| `crates/wzp-android/src/audio_ring.rs` | Complete rewrite — the core fix |
| `crates/wzp-android/src/stats.rs` | Add 3 counter fields |
| `crates/wzp-android/src/engine.rs` | Wire counters into get_stats() + periodic logs |
| `android/app/src/main/java/com/wzp/engine/CallStats.kt` | Parse 3 new JSON fields |

## What Does NOT Change

- `AudioPipeline.kt` — calls `readAudio()`/`writeAudio()` unchanged; ring fix is transparent
- `jni_bridge.rs` — JNI bridge passes through unchanged
- `audio_android.rs` — separate Oboe-based ring, currently unused, different design
- Relay code — relay is confirmed healthy
- Desktop client — uses `Mutex + mpsc`, not `AudioRing`

---

**New file:** `docs/android/fix-capture-thread-crash.md` (149 lines)

# Fix: Capture/Playout Thread Use-After-Free on Hangup

## Problem

App crashes (SIGSEGV) when hanging up a call. The capture thread (`wzp-capture`) calls `engine.writeAudio()` via JNI after `teardown()` has freed the native engine handle. The same race exists for the playout thread's `readAudio()`.

**Root cause:** a TOCTOU race between the `nativeHandle == 0L` check in `WzpEngine.writeAudio()`/`readAudio()` and `destroy()` freeing the native memory on the ViewModel thread. The audio threads can't be joined (libcrypto TLS destructor crash), so there is no synchronization between `stopAudio()` and `destroy()`.

**Full forensics:** `debug/INCIDENT-2026-04-06-capture-thread-use-after-free.md`

---

## Solution: Destroy Latch

Add a `CountDownLatch(2)` that both audio threads count down after exiting their loops. `teardown()` awaits the latch (with a timeout) before calling `destroy()`, guaranteeing no in-flight JNI calls.

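The same drain-latch idea, sketched standalone in Rust for illustration (the actual fix uses Kotlin's `java.util.concurrent.CountDownLatch`; the `DrainLatch` type and thread names here are hypothetical, built from `Mutex` + `Condvar`):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Each audio thread decrements the counter after its loop exits; teardown
// waits, with a timeout, for the counter to reach zero before freeing the
// engine, so no JNI call can race the destroy.
struct DrainLatch {
    inner: Mutex<usize>,
    cv: Condvar,
}

impl DrainLatch {
    fn new(count: usize) -> Self {
        Self { inner: Mutex::new(count), cv: Condvar::new() }
    }

    fn count_down(&self) {
        let mut n = self.inner.lock().unwrap();
        *n = n.saturating_sub(1);
        if *n == 0 {
            self.cv.notify_all();
        }
    }

    /// Returns true if the latch reached zero before the timeout.
    fn await_drain(&self, timeout: Duration) -> bool {
        let guard = self.inner.lock().unwrap();
        let (_guard, res) = self
            .cv
            .wait_timeout_while(guard, timeout, |n| *n > 0)
            .unwrap();
        !res.timed_out()
    }
}

fn main() {
    let latch = Arc::new(DrainLatch::new(2)); // one for capture, one for playout
    for name in ["capture", "playout"] {
        let l = Arc::clone(&latch);
        thread::spawn(move || {
            // ... audio loop runs until the running flag flips to false ...
            println!("{name} loop exited");
            l.count_down(); // signal: no more engine calls from this thread
        });
    }
    // teardown(): destroy the engine only after both threads have drained
    assert!(latch.await_drain(Duration::from_millis(200)));
    println!("drained");
}
```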
---

## Implementation Steps

### Step 1: Add a drain latch to `AudioPipeline`

**File:** `android/app/src/main/java/com/wzp/audio/AudioPipeline.kt`

Add a `CountDownLatch` field:

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

class AudioPipeline(private val context: Context) {
    // ... existing fields ...

    /** Latch counted down by each audio thread after exiting its loop.
     * stop() does NOT wait on this — teardown waits via awaitDrain(). */
    private var drainLatch: CountDownLatch? = null
```

In `start()`, create the latch before spawning threads:

```kotlin
fun start(engine: WzpEngine) {
    if (running) return
    running = true
    drainLatch = CountDownLatch(2) // one for capture, one for playout

    captureThread = Thread({
        runCapture(engine)
        drainLatch?.countDown() // signal: capture loop exited
        parkThread()
    }, "wzp-capture").apply { ... }

    playoutThread = Thread({
        runPlayout(engine)
        drainLatch?.countDown() // signal: playout loop exited
        parkThread()
    }, "wzp-playout").apply { ... }
    // ...
}
```

Add `awaitDrain()` — called by the ViewModel before `destroy()`:

```kotlin
/** Block until both audio threads have exited their loops (max 200ms).
 * After this returns, no more JNI calls to the engine will be made. */
fun awaitDrain(): Boolean {
    return drainLatch?.await(200, TimeUnit.MILLISECONDS) ?: true
}
```

`stop()` remains unchanged (non-blocking, sets `running = false`).

### Step 2: Update `CallViewModel.teardown()` to await drain

**File:** `android/app/src/main/java/com/wzp/ui/call/CallViewModel.kt`

Change teardown to wait for the audio threads before destroying:

```kotlin
private fun teardown(stopService: Boolean = true) {
    Log.i(TAG, "teardown: stopping audio, stopService=$stopService")
    val hadCall = audioStarted
    CallService.onStopFromNotification = null
    stopAudio() // sets running=false (non-blocking)
    stopStatsPolling()

    // Wait for audio threads to exit their loops before destroying the engine.
    // This guarantees no in-flight JNI calls to writeAudio/readAudio.
    val drained = audioPipeline?.awaitDrain() ?: true
    if (!drained) {
        Log.w(TAG, "teardown: audio threads did not drain in time")
    }
    audioPipeline = null

    Log.i(TAG, "teardown: stopping engine")
    try { engine?.stopCall() } catch (e: Exception) { Log.w(TAG, "stopCall err: $e") }
    try { engine?.destroy() } catch (e: Exception) { Log.w(TAG, "destroy err: $e") }
    engine = null
    engineInitialized = false
    // ... rest unchanged
}
```

**Key change:** `awaitDrain()` is called AFTER `stopAudio()` (which sets `running=false`) but BEFORE `engine?.destroy()`. The latch guarantees both threads have exited their `while(running)` loops and will never call `writeAudio`/`readAudio` again.

Also move `audioPipeline = null` to after `awaitDrain()` to keep the reference alive for the latch call.

### Step 3: Move `stopAudio()` pipeline nulling

**File:** `android/app/src/main/java/com/wzp/ui/call/CallViewModel.kt`

In `stopAudio()`, do NOT null out the pipeline — let `teardown()` handle it after the drain:

```kotlin
private fun stopAudio() {
    if (!audioStarted) return
    audioPipeline?.stop() // sets running=false
    // DON'T null audioPipeline here — teardown() needs it for awaitDrain()
    audioRouteManager?.unregister()
    audioRouteManager?.setSpeaker(false)
    _isSpeaker.value = false
    audioStarted = false
}
```

---

## Files to Modify

| File | What changes |
|------|-------------|
| `android/.../audio/AudioPipeline.kt` | Add `CountDownLatch`, `countDown()` in threads, `awaitDrain()` method |
| `android/.../ui/call/CallViewModel.kt` | `teardown()` calls `awaitDrain()` before `destroy()`; `stopAudio()` doesn't null pipeline |

## What Does NOT Change

- `WzpEngine.kt` — the `nativeHandle == 0L` guard stays as defense-in-depth
- `jni_bridge.rs` — `panic::catch_unwind` stays as last resort
- `AudioPipeline.stop()` — remains non-blocking
- Thread parking — still needed to avoid libcrypto TLS crash

## Verification

1. Build APK, install on test device
2. Make a call, hang up — verify no crash in logcat (`adb logcat -s AndroidRuntime:E DEBUG:F`)
3. Rapid call/hangup/call/hangup cycles — stress the teardown path
4. Check logcat for `teardown: audio threads did not drain in time` — should never appear under normal conditions
5. Verify debug report still works after hangup (latch doesn't interfere with report collection)

---

**New file:** `scripts/Dockerfile.android-builder` (75 lines)

# =============================================================================
# WZ Phone — Android build environment (Debian 12 / Bookworm)
#
# Matches the bare-metal build-android.sh environment:
#   - Debian 12 (cmake 3.25, no Android cross-compilation bugs)
#   - JDK 17 (Gradle 8.5 + AGP 8.2.0 compatible)
#   - NDK 26.1 (last stable before scudo/MTE crash on NDK 27+)
#   - Rust stable with aarch64-linux-android target + cargo-ndk
#
# Build: docker build -t wzp-android-builder -f Dockerfile.android-builder .
# =============================================================================
FROM debian:bookworm

ARG NDK_VERSION=26.1.10909125
ARG ANDROID_API=34

ENV DEBIAN_FRONTEND=noninteractive \
    ANDROID_HOME=/opt/android-sdk \
    JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64

ENV ANDROID_NDK_HOME=$ANDROID_HOME/ndk/$NDK_VERSION \
    ANDROID_NDK=$ANDROID_HOME/ndk/$NDK_VERSION

# ── System packages ──────────────────────────────────────────────────────────
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    curl \
    git \
    libssl-dev \
    pkg-config \
    unzip \
    wget \
    zip \
    openjdk-17-jdk-headless \
    ca-certificates \
    libasound2-dev \
    && rm -rf /var/lib/apt/lists/*

# ── Android SDK + NDK 26.1 ──────────────────────────────────────────────────
RUN mkdir -p $ANDROID_HOME/cmdline-tools \
    && cd /tmp \
    && wget -q https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip -O cmdtools.zip \
    && unzip -qo cmdtools.zip -d $ANDROID_HOME/cmdline-tools \
    && mv $ANDROID_HOME/cmdline-tools/cmdline-tools $ANDROID_HOME/cmdline-tools/latest \
    && rm cmdtools.zip

RUN yes | $ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager --licenses > /dev/null 2>&1 \
    && $ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager --install \
    "platforms;android-${ANDROID_API}" \
    "build-tools;${ANDROID_API}.0.0" \
    "ndk;${NDK_VERSION}" \
    "platform-tools" \
    2>&1 | grep -v '^\[' > /dev/null

# Make SDK world-readable so the builder user can access it
RUN chmod -R a+rX $ANDROID_HOME

# ── Builder user (1000:1000) ─────────────────────────────────────────────────
RUN groupadd -g 1000 builder \
    && useradd -m -u 1000 -g 1000 -s /bin/bash builder

USER builder
WORKDIR /home/builder

# ── Rust toolchain ───────────────────────────────────────────────────────────
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \
    | sh -s -- -y --default-toolchain stable \
    && . $HOME/.cargo/env \
    && rustup target add aarch64-linux-android \
    && cargo install cargo-ndk

ENV PATH="/home/builder/.cargo/bin:$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/platform-tools:$JAVA_HOME/bin:$PATH"

WORKDIR /build/source

---

**New file:** `scripts/build-and-notify.sh` (executable, 159 lines)

#!/usr/bin/env bash
set -euo pipefail

# Build Android APK via Docker on SepehrHomeserverdk, upload to rustypaste,
# notify via ntfy.sh/wzp. Fire and forget.
#
# Usage:
#   ./scripts/build-and-notify.sh             Build + upload + notify
#   ./scripts/build-and-notify.sh --rust      Force Rust rebuild
#   ./scripts/build-and-notify.sh --pull      Git pull before building
#   ./scripts/build-and-notify.sh --install   Also download + adb install locally

REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/android-apk"
SSH_OPTS="-o ConnectTimeout=15 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 -o LogLevel=ERROR"

REBUILD_RUST=0
DO_PULL=0
DO_INSTALL=0
for arg in "$@"; do
    case "$arg" in
        --rust) REBUILD_RUST=1 ;;
        --pull) DO_PULL=1 ;;
        --install) DO_INSTALL=1 ;;
    esac
done

log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }

ssh_cmd() { ssh -A $SSH_OPTS "$REMOTE_HOST" "$@"; }

# Upload the remote build script
log "Uploading build script to remote..."
ssh_cmd "cat > /tmp/wzp-docker-build.sh" <<'REMOTE_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail

BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
REBUILD_RUST="${1:-0}"
DO_PULL="${2:-0}"

notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }

trap 'notify "WZP Android build FAILED! Check /tmp/wzp-build.log"' ERR

# Pull if requested
if [ "$DO_PULL" = "1" ]; then
    echo ">>> Pulling latest..."
    cd "$BASE_DIR/data/source"
    git checkout -- . 2>/dev/null || true
    git pull origin feat/android-voip-client 2>&1 | tail -3
fi

# Clean Rust if requested
if [ "$REBUILD_RUST" = "1" ]; then
    echo ">>> Cleaning Rust target..."
    rm -rf "$BASE_DIR/data/cache/target/aarch64-linux-android/release"
fi

# Fix perms
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache" \
    ! -user 1000 -o ! -group 1000 2>/dev/null | \
    xargs -r chown 1000:1000 2>/dev/null || true

# Clean jniLibs
rm -rf "$BASE_DIR/data/source/android/app/src/main/jniLibs/arm64-v8a"

notify "WZP build started..."

echo ">>> Building in Docker..."
docker run --rm --user 1000:1000 \
    -v "$BASE_DIR/data/source:/build/source" \
    -v "$BASE_DIR/data/cache/cargo-registry:/home/builder/.cargo/registry" \
    -v "$BASE_DIR/data/cache/cargo-git:/home/builder/.cargo/git" \
    -v "$BASE_DIR/data/cache/target:/build/source/target" \
    -v "$BASE_DIR/data/cache/gradle:/home/builder/.gradle" \
    wzp-android-builder bash -c '
        set -euo pipefail
        cd /build/source

        echo ">>> Rust build..."
        cargo ndk -t arm64-v8a -o android/app/src/main/jniLibs build --release -p wzp-android 2>&1 | tail -5

        echo ">>> Checking .so files..."
        # cargo-ndk may not copy libc++_shared.so — grab it from the NDK if missing
        if [ ! -f android/app/src/main/jniLibs/arm64-v8a/libc++_shared.so ]; then
            echo ">>> libc++_shared.so missing, copying from NDK..."
            NDK_LIBCXX=$(find "$ANDROID_NDK_HOME" -name "libc++_shared.so" -path "*/aarch64-linux-android/*" | head -1)
            if [ -n "$NDK_LIBCXX" ]; then
                cp "$NDK_LIBCXX" android/app/src/main/jniLibs/arm64-v8a/
                echo "Copied from: $NDK_LIBCXX"
            else
                echo "WARNING: libc++_shared.so not found in NDK, APK may crash at runtime"
            fi
        fi
        ls -lh android/app/src/main/jniLibs/arm64-v8a/
        [ -f android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ] || { echo "ERROR: libwzp_android.so missing!"; exit 1; }

        echo ">>> APK build..."
        cd android && chmod +x gradlew
        ./gradlew clean assembleDebug --no-daemon --warning-mode=none 2>&1 | tail -3
        echo "APK_BUILT"
    '

# Upload to rustypaste
echo ">>> Uploading to rustypaste..."
source "$BASE_DIR/.env"
APK=$(find "$BASE_DIR/data/source/android" -name "app-debug*.apk" -path "*/outputs/apk/*" | head -1)
if [ -n "$APK" ]; then
    URL=$(curl -s -F "file=@$APK" -H "Authorization: $rusty_auth_token" "$rusty_address")
    echo "UPLOAD_URL=$URL"
    notify "WZP build done! APK: $URL"
    echo ">>> Done! APK at: $URL"
else
    notify "WZP build FAILED - no APK"
    echo "ERROR: No APK found"
    exit 1
fi
REMOTE_SCRIPT

ssh_cmd "chmod +x /tmp/wzp-docker-build.sh"

# Run in tmux
log "Starting build in tmux..."
ssh_cmd "tmux kill-session -t wzp-build 2>/dev/null; true"
ssh_cmd "tmux new-session -d -s wzp-build '/tmp/wzp-docker-build.sh $REBUILD_RUST $DO_PULL 2>&1 | tee /tmp/wzp-build.log'"

log "Build running! You'll get a notification on ntfy.sh/wzp with the download URL."
echo ""
echo "  Monitor: ssh $REMOTE_HOST 'tail -f /tmp/wzp-build.log'"
echo "  Status:  ssh $REMOTE_HOST 'tail -5 /tmp/wzp-build.log'"
echo ""

# Optionally wait and install locally
if [ "$DO_INSTALL" = "1" ]; then
    log "Waiting for build to finish..."
    while true; do
        sleep 15
        if ssh_cmd "grep -q 'UPLOAD_URL\|ERROR' /tmp/wzp-build.log 2>/dev/null"; then
            break
        fi
    done

    URL=$(ssh_cmd "grep UPLOAD_URL /tmp/wzp-build.log | tail -1 | cut -d= -f2")
    if [ -n "$URL" ]; then
        log "Downloading APK..."
        mkdir -p "$LOCAL_OUTPUT"
        curl -s -o "$LOCAL_OUTPUT/wzp-debug.apk" "$URL"
        log "Installing..."
        adb uninstall com.wzp.phone 2>/dev/null || true
        adb install "$LOCAL_OUTPUT/wzp-debug.apk"
        log "Done!"
    else
        err "Build failed"
    fi
fi

---

**New file:** `scripts/build-android-cloud.sh` (executable, 376 lines)

#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# Build WarzonePhone Android APK using a temporary Hetzner Cloud VPS.
|
||||
# Creates a VM, builds both debug and release APKs, downloads them, destroys the VM.
|
||||
#
|
||||
# Prerequisites: hcloud CLI authenticated, SSH key "wz" registered.
|
||||
#
|
||||
# Usage:
|
||||
# ./scripts/build-android-cloud.sh Full build (create → build → download → destroy)
|
||||
# ./scripts/build-android-cloud.sh --prepare Create VM and install deps only
|
||||
# ./scripts/build-android-cloud.sh --build Build on existing VM
|
||||
# ./scripts/build-android-cloud.sh --transfer Download APKs from VM
|
||||
# ./scripts/build-android-cloud.sh --destroy Delete the VM
|
||||
# ./scripts/build-android-cloud.sh --all prepare + build + transfer (VM persists)
|
||||
# ./scripts/build-android-cloud.sh --upload Re-upload source to existing VM
|
||||
#
|
||||
# Environment variables (all optional):
|
||||
# WZP_BRANCH Branch to build (default: feat/android-voip-client)
|
||||
# WZP_SERVER_TYPE Hetzner server type (default: cx32 — 4 vCPU, 8GB RAM)
|
||||
# WZP_KEEP_VM Set to 1 to skip destroy on full build
|
||||
|
||||
SSH_KEY_NAME="wz"
|
||||
SSH_KEY_PATH="/Users/manwe/CascadeProjects/wzp"
|
||||
SERVER_TYPE="${WZP_SERVER_TYPE:-cx33}"
|
||||
IMAGE="ubuntu-24.04"
|
||||
SERVER_NAME="wzp-android-builder"
|
||||
REMOTE_USER="root"
|
||||
OUTPUT_DIR="target/android-apk"
|
||||
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
|
||||
BRANCH="${WZP_BRANCH:-feat/android-voip-client}"
|
||||
KEEP_VM="${WZP_KEEP_VM:-0}"
|
||||
|
||||
SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 -o LogLevel=ERROR"
|
||||
|
||||
# NDK 26.1 — NDK 27 crashes scudo on Android 16 MTE devices
|
||||
NDK_VERSION="26.1.10909125"
|
||||
ANDROID_API="34"
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
log() { echo -e "\n\033[1;36m>>> $*\033[0m"; }
|
||||
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }
|
||||
die() { err "$@"; do_destroy_quiet; exit 1; }
|
||||
|
||||
get_vm_ip() {
|
||||
hcloud server list -o columns=name,ipv4 -o noheader 2>/dev/null | grep "$SERVER_NAME" | awk '{print $2}' | tr -d ' '
|
||||
}
|
||||
|
||||
ssh_cmd() {
|
||||
local ip
|
||||
ip=$(get_vm_ip)
|
||||
[ -n "$ip" ] || die "No VM found. Run --prepare first."
|
||||
ssh $SSH_OPTS -A -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip" "$@"
|
||||
}
|
||||
|
||||
scp_down() {
|
||||
local ip
|
||||
ip=$(get_vm_ip)
|
||||
[ -n "$ip" ] || die "No VM found."
|
||||
scp $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip:$1" "$2"
|
||||
}
|
||||
|
||||
do_destroy_quiet() {
|
||||
local name
|
||||
name=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
|
||||
if [ -n "$name" ]; then
|
||||
echo ""
|
||||
err "Cleaning up — destroying VM $name"
|
||||
hcloud server delete "$name" 2>/dev/null || true
|
||||
fi
|
||||
}
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# --prepare: Create VM, install all build dependencies
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
do_prepare() {
|
||||
# Check if VM already exists
|
||||
local existing
|
||||
existing=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
|
||||
if [ -n "$existing" ]; then
|
||||
log "VM already exists: $existing — reusing"
|
||||
do_upload
|
||||
return
|
||||
fi
|
||||
|
||||
log "Creating Hetzner VM ($SERVER_TYPE, $IMAGE)..."
|
||||
hcloud server create \
|
||||
--name "$SERVER_NAME" \
|
||||
--type "$SERVER_TYPE" \
|
||||
--image "$IMAGE" \
|
||||
--ssh-key "$SSH_KEY_NAME" \
|
||||
--location fsn1 \
|
||||
--quiet \
|
||||
|| die "Failed to create VM"
|
||||
|
||||
local ip
|
||||
ip=$(get_vm_ip)
|
||||
[ -n "$ip" ] || die "VM created but no IP found"
|
||||
echo " VM: $SERVER_NAME @ $ip"
|
||||
|
||||
# Wait for SSH
|
||||
log "Waiting for SSH..."
|
||||
local ok=0
|
||||
for i in $(seq 1 30); do
|
||||
if ssh $SSH_OPTS -i "$SSH_KEY_PATH" "$REMOTE_USER@$ip" "echo ok" &>/dev/null; then
|
||||
ok=1
|
||||
break
|
||||
fi
|
||||
sleep 2
|
||||
done
|
||||
[ "$ok" -eq 1 ] || die "SSH timeout after 60s"
|
||||
|
||||
# System packages
|
||||
log "Installing system packages (cmake, JDK 17, build tools)..."
|
||||
ssh_cmd "export DEBIAN_FRONTEND=noninteractive && \
|
||||
apt-get update -qq && \
|
||||
apt-get install -y -qq \
|
||||
build-essential cmake curl git libssl-dev pkg-config \
|
||||
unzip wget zip openjdk-17-jdk-headless \
|
||||
> /dev/null 2>&1" \
|
||||
|| die "Failed to install system packages"
|
||||
|
||||
# Verify cmake version (must be <= 3.30)
|
||||
local cmake_ver
|
||||
cmake_ver=$(ssh_cmd "cmake --version | head -1")
|
||||
echo " cmake: $cmake_ver"
|
||||
echo " java: $(ssh_cmd "java -version 2>&1 | head -1")"
|
||||
|
||||
# Rust
|
||||
log "Installing Rust toolchain..."
|
||||
ssh_cmd "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable > /dev/null 2>&1" \
|
||||
|| die "Failed to install Rust"
|
||||
ssh_cmd "source \$HOME/.cargo/env && rustup target add aarch64-linux-android > /dev/null 2>&1"
|
||||
ssh_cmd "source \$HOME/.cargo/env && cargo install cargo-ndk > /dev/null 2>&1" \
|
||||
|| die "Failed to install cargo-ndk"
|
||||
echo " rust: $(ssh_cmd "source \$HOME/.cargo/env && rustc --version")"
|
||||
|
||||
# Android SDK + NDK
|
||||
log "Installing Android SDK + NDK $NDK_VERSION..."
|
||||
ssh_cmd "export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 && \
|
||||
mkdir -p \$HOME/android-sdk/cmdline-tools && \
|
||||
cd /tmp && \
|
||||
wget -q https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip -O cmdtools.zip && \
|
||||
unzip -qo cmdtools.zip -d \$HOME/android-sdk/cmdline-tools && \
|
||||
mv \$HOME/android-sdk/cmdline-tools/cmdline-tools \$HOME/android-sdk/cmdline-tools/latest 2>/dev/null; \
|
||||
yes | \$HOME/android-sdk/cmdline-tools/latest/bin/sdkmanager --licenses > /dev/null 2>&1; \
|
||||
\$HOME/android-sdk/cmdline-tools/latest/bin/sdkmanager --install \
|
||||
'platforms;android-${ANDROID_API}' \
|
||||
'build-tools;${ANDROID_API}.0.0' \
|
||||
'ndk;${NDK_VERSION}' \
|
||||
'platform-tools' \
|
||||
2>&1 | grep -v '^\[' > /dev/null" \
|
||||
|| die "Failed to install Android SDK/NDK"
|
||||
|
||||
ssh_cmd "[ -d \$HOME/android-sdk/ndk/$NDK_VERSION ]" \
|
||||
|| die "NDK not found after install"
|
||||
echo " NDK: $NDK_VERSION"
|
||||
|
||||
# Upload source
|
||||
do_upload
|
||||
|
||||
log "VM ready!"
|
||||
echo " IP: $ip"
|
||||
echo " SSH: ssh -A -i $SSH_KEY_PATH root@$ip"
|
||||
}
|
||||
|
||||
# ---------------------------------------------------------------------------
# --upload: Upload source code to VM
# ---------------------------------------------------------------------------

do_upload() {
    log "Uploading source code (rsync)..."
    local ip
    ip=$(get_vm_ip)
    [ -n "$ip" ] || die "No VM found."
    rsync -az --delete \
        --exclude='target' \
        --exclude='.git' \
        --exclude='.claude' \
        --exclude='node_modules' \
        --exclude='dist' \
        --exclude='desktop/src-tauri/gen' \
        -e "ssh $SSH_OPTS -i $SSH_KEY_PATH" \
        "$PROJECT_DIR/" "$REMOTE_USER@$ip:/root/wzp-build/"
    echo " Source uploaded."
}

# ---------------------------------------------------------------------------
# --build: Build native .so + debug & release APKs
# ---------------------------------------------------------------------------

do_build() {
    log "Building Rust native library (arm64-v8a, release)..."

    # Clean Rust release target to force full rebuild.
    # cargo-ndk only copies libc++_shared.so when it actually links — a partial
    # clean that skips relinking leaves libc++_shared.so missing from jniLibs.
    ssh_cmd "rm -rf /root/wzp-build/target/aarch64-linux-android/release \
        /root/wzp-build/android/app/src/main/jniLibs/arm64-v8a"

    # ANDROID_NDK must be set (not just ANDROID_NDK_HOME) — cmake checks it
    ssh_cmd "source \$HOME/.cargo/env && \
        export ANDROID_HOME=\$HOME/android-sdk && \
        export ANDROID_NDK_HOME=\$ANDROID_HOME/ndk/$NDK_VERSION && \
        export ANDROID_NDK=\$ANDROID_NDK_HOME && \
        cd /root/wzp-build && \
        cargo ndk -t arm64-v8a \
            -o android/app/src/main/jniLibs \
            build --release -p wzp-android 2>&1" | tail -5 \
        || die "Rust native build failed"

    ssh_cmd "[ -f /root/wzp-build/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ]" \
        || die "libwzp_android.so not found after build"

    local so_size
    so_size=$(ssh_cmd "du -h /root/wzp-build/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so | cut -f1")
    echo " .so: $so_size"

    # Generate debug keystore if missing
    ssh_cmd "[ -f /root/wzp-build/android/keystore/wzp-debug.jks ] || \
        (mkdir -p /root/wzp-build/android/keystore && \
        keytool -genkey -v \
            -keystore /root/wzp-build/android/keystore/wzp-debug.jks \
            -keyalg RSA -keysize 2048 -validity 10000 \
            -alias wzp-debug -storepass android -keypass android \
            -dname 'CN=WZP Debug' > /dev/null 2>&1)"

    # Build debug APK
    log "Building debug APK..."
    ssh_cmd "export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 && \
        export ANDROID_HOME=\$HOME/android-sdk && \
        cd /root/wzp-build/android && \
        chmod +x ./gradlew && \
        ./gradlew assembleDebug --no-daemon --warning-mode=none 2>&1" | tail -3 \
        || die "Debug APK build failed"

    # Build release APK (uses debug keystore for now)
    log "Building release APK..."
    # Copy debug keystore as release keystore (same password in build.gradle)
    ssh_cmd "cp /root/wzp-build/android/keystore/wzp-debug.jks /root/wzp-build/android/keystore/wzp-release.jks 2>/dev/null; true"
    ssh_cmd "export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 && \
        export ANDROID_HOME=\$HOME/android-sdk && \
        cd /root/wzp-build/android && \
        ./gradlew assembleRelease --no-daemon --warning-mode=none 2>&1" | tail -3 \
        || echo " (release APK failed — debug APK still available)"

    log "Build complete!"
    ssh_cmd "find /root/wzp-build/android -name '*.apk' -path '*/outputs/apk/*' -exec ls -lh {} \;"
}

# ---------------------------------------------------------------------------
# --transfer: Download APKs to local machine
# ---------------------------------------------------------------------------

do_transfer() {
    log "Downloading APKs..."
    mkdir -p "$OUTPUT_DIR"

    local ip
    ip=$(get_vm_ip)

    # Debug APK
    local debug_apk
    debug_apk=$(ssh_cmd "find /root/wzp-build/android -name 'app-debug*.apk' -path '*/outputs/apk/*' | head -1")
    if [ -n "$debug_apk" ]; then
        scp_down "$debug_apk" "$OUTPUT_DIR/wzp-debug.apk"
        echo " debug: $OUTPUT_DIR/wzp-debug.apk ($(du -h "$OUTPUT_DIR/wzp-debug.apk" | cut -f1))"
    fi

    # Release APK
    local release_apk
    release_apk=$(ssh_cmd "find /root/wzp-build/android -name 'app-release*.apk' -path '*/outputs/apk/*' | head -1" || true)
    if [ -n "$release_apk" ]; then
        scp_down "$release_apk" "$OUTPUT_DIR/wzp-release.apk"
        echo " release: $OUTPUT_DIR/wzp-release.apk ($(du -h "$OUTPUT_DIR/wzp-release.apk" | cut -f1))"
    fi

    # Also copy the .so for inspection
    scp_down "/root/wzp-build/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so" "$OUTPUT_DIR/libwzp_android.so"
    echo " .so: $OUTPUT_DIR/libwzp_android.so"

    log "Transfer complete!"
    echo ""
    echo " Install debug:   adb install -r $OUTPUT_DIR/wzp-debug.apk"
    [ -f "$OUTPUT_DIR/wzp-release.apk" ] && echo " Install release: adb install -r $OUTPUT_DIR/wzp-release.apk"
}

# ---------------------------------------------------------------------------
# --destroy: Delete the VM
# ---------------------------------------------------------------------------

do_destroy() {
    local name
    name=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
    if [ -z "$name" ]; then
        echo "No VM to destroy."
        return
    fi
    log "Deleting VM: $name"
    hcloud server delete "$name"
    echo " Done."
}

# ---------------------------------------------------------------------------
# Full build: create → build → transfer → destroy
# ---------------------------------------------------------------------------

do_full() {
    trap 'err "Build failed!"; do_destroy_quiet; exit 1' ERR

    do_prepare

    # Disable trap during build — release APK failure is non-fatal
    trap - ERR
    do_build
    do_transfer
    trap 'err "Build failed!"; do_destroy_quiet; exit 1' ERR

    if [ "$KEEP_VM" = "1" ]; then
        log "VM kept alive (WZP_KEEP_VM=1). Destroy with: $0 --destroy"
    else
        do_destroy
    fi

    log "All done!"
    echo ""
    echo " ┌──────────────────────────────────────────────────┐"
    echo " │ Debug APK: $OUTPUT_DIR/wzp-debug.apk"
    [ -f "$OUTPUT_DIR/wzp-release.apk" ] && \
        echo " │ Release APK: $OUTPUT_DIR/wzp-release.apk"
    echo " │"
    echo " │ Install: adb install -r $OUTPUT_DIR/wzp-debug.apk"
    echo " └──────────────────────────────────────────────────┘"
}

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

case "${1:-}" in
    --prepare) do_prepare ;;
    --build) do_build ;;
    --transfer) do_transfer ;;
    --destroy) do_destroy ;;
    --upload) do_upload ;;
    --all)
        do_prepare
        do_build
        do_transfer
        log "VM still running. Destroy with: $0 --destroy"
        ;;
    "")
        do_full
        ;;
    *)
        echo "Usage: $0 [--prepare|--build|--transfer|--destroy|--all|--upload]"
        echo ""
        echo "  (no args)   Full build: create VM → build → download → destroy VM"
        echo "  --prepare   Create VM and install deps"
        echo "  --build     Build on existing VM"
        echo "  --transfer  Download APKs from VM"
        echo "  --destroy   Delete the VM"
        echo "  --all       prepare + build + transfer (VM persists)"
        echo "  --upload    Re-upload source to existing VM"
        echo ""
        echo "Environment:"
        echo "  WZP_BRANCH=$BRANCH"
        echo "  WZP_SERVER_TYPE=$SERVER_TYPE"
        echo "  WZP_KEEP_VM=$KEEP_VM (set to 1 to skip auto-destroy)"
        exit 1
        ;;
esac
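The --transfer step above drops the APKs into $OUTPUT_DIR. As a quick local sanity check, not part of the script, one can confirm the arm64 native library actually made it into the downloaded APK, since an APK is an ordinary zip archive (the helper name and paths below are illustrative):

```shell
# Hypothetical post-transfer check: an APK is a zip archive, so `unzip -l`
# can confirm the native library was packaged into it.
check_native_lib() {
    local apk="$1"
    if unzip -l "$apk" 2>/dev/null | grep -q 'lib/arm64-v8a/libwzp_android.so'; then
        echo "ok"
    else
        echo "missing"
    fi
}

# A missing or unreadable APK reports "missing" rather than erroring out.
check_native_lib target/android-apk/wzp-debug.apk
```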
scripts/build-android-docker.sh (new executable file, 416 lines)
@@ -0,0 +1,416 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# WZ Phone — Android APK build via Docker on remote host
#
# Replaces Hetzner Cloud VMs with a Docker container on SepehrHomeserverdk.
# Persistent storage at /mnt/storage/manBuilder/data/{source,cache,keystore}.
# Uploads APKs to rustypaste, then SCPs them back locally.
#
# Prerequisites:
#   - SSH config has "SepehrHomeserverdk" host entry
#   - SSH agent running with keys for both remote host and git.manko.yoga
#   - Docker installed on remote host
#   - /mnt/storage/manBuilder/.env with rusty_address and rusty_auth_token
#
# Usage:
#   ./scripts/build-android-docker.sh             Full: pull+prepare+build+upload+transfer
#   ./scripts/build-android-docker.sh --prepare   Build Docker image + sync keystores
#   ./scripts/build-android-docker.sh --pull      Clone/update source from Gitea
#   ./scripts/build-android-docker.sh --build     Build debug APK inside Docker
#   ./scripts/build-android-docker.sh --upload    Upload APKs to rustypaste
#   ./scripts/build-android-docker.sh --transfer  SCP APKs back to local machine
#   ./scripts/build-android-docker.sh --all       pull+build+upload+transfer (image ready)
#
# Add --release to also build release APK:
#   ./scripts/build-android-docker.sh --build --release
#   ./scripts/build-android-docker.sh --all --release
#   ./scripts/build-android-docker.sh --release   (full pipeline, debug+release)
#
# Environment variables (all optional):
#   WZP_BRANCH   Branch to build (default: feat/android-voip-client)
# =============================================================================

REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
REPO_URL="ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git"
BRANCH="${WZP_BRANCH:-feat/android-voip-client}"
DOCKER_IMAGE="wzp-android-builder"
LOCAL_OUTPUT_DIR="target/android-apk"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
LOCAL_KEYSTORE_DIR="$PROJECT_DIR/android/keystore"

SSH_OPTS="-o ConnectTimeout=10 -o LogLevel=ERROR -o ServerAliveInterval=15 -o ServerAliveCountMax=4"

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
log() { echo -e "\n\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }

ssh_cmd() {
    ssh -A $SSH_OPTS "$REMOTE_HOST" "$@"
}

push_reminder() {
    echo ""
    echo " ┌──────────────────────────────────────────────────────────────────┐"
    echo " │ IMPORTANT: Push your changes to origin (Gitea) before build! │"
    echo " │ │"
    echo " │ The build fetches from: │"
    echo " │ ssh://git@git.manko.yoga:222/manawenuz/wz-phone.git │"
    echo " │ │"
    echo " │ Run: git push origin $BRANCH"
    echo " └──────────────────────────────────────────────────────────────────┘"
    echo ""
    read -r -p "Press Enter to continue (Ctrl-C to abort)... "
}

# ---------------------------------------------------------------------------
# --prepare: Create remote dirs, build Docker image, sync keystores
# ---------------------------------------------------------------------------
do_prepare() {
    log "Preparing remote environment..."
    ssh_cmd "mkdir -p $BASE_DIR/data/{source,cache/cargo-registry,cache/cargo-git,cache/target,cache/gradle,keystore}"

    # Sync keystores (gitignored — won't exist after clone)
    REMOTE_HAS_KEYSTORE=$(ssh_cmd "[ -f $BASE_DIR/data/keystore/wzp-debug.jks ] && echo yes || echo no")
    if [ "$REMOTE_HAS_KEYSTORE" = "no" ]; then
        if [ -f "$LOCAL_KEYSTORE_DIR/wzp-debug.jks" ]; then
            log "Uploading keystores to remote persistent storage..."
            scp $SSH_OPTS \
                "$LOCAL_KEYSTORE_DIR/wzp-debug.jks" \
                "$LOCAL_KEYSTORE_DIR/wzp-release.jks" \
                "$REMOTE_HOST:$BASE_DIR/data/keystore/"
            echo " Keystores uploaded to $BASE_DIR/data/keystore/"
        else
            err "No keystores found locally at $LOCAL_KEYSTORE_DIR/"
            err "Build will generate a temporary debug keystore instead."
        fi
    else
        echo " Keystores already on remote."
    fi

    # Upload Dockerfile from local (always use local version — no git dependency)
    log "Uploading Dockerfile to remote..."
    ssh_cmd "mkdir -p $BASE_DIR/data/source/scripts"
    scp $SSH_OPTS \
        "$PROJECT_DIR/scripts/Dockerfile.android-builder" \
        "$REMOTE_HOST:$BASE_DIR/data/source/scripts/Dockerfile.android-builder"

    # Build Docker image
    log "Building Docker image (Debian 12 + Rust + Android SDK/NDK)..."
    ssh_cmd bash <<IMAGE_EOF
set -euo pipefail
docker build -t "$DOCKER_IMAGE" - < "$BASE_DIR/data/source/scripts/Dockerfile.android-builder"
echo " Docker image '$DOCKER_IMAGE' ready."
IMAGE_EOF
}

# ---------------------------------------------------------------------------
# --pull: Clone or update source from Gitea
# ---------------------------------------------------------------------------
do_pull() {
    push_reminder

    log "Updating source (branch: $BRANCH)..."
    ssh_cmd bash <<PULL_EOF
set -euo pipefail
mkdir -p "$BASE_DIR/data/source" \
    "$BASE_DIR/data/cache/cargo-registry" \
    "$BASE_DIR/data/cache/cargo-git" \
    "$BASE_DIR/data/cache/target" \
    "$BASE_DIR/data/cache/gradle" \
    "$BASE_DIR/data/keystore"
cd "$BASE_DIR/data/source"
if [ -d .git ]; then
    echo " Fetching origin..."
    git fetch origin
    git checkout "$BRANCH" 2>/dev/null || git checkout -b "$BRANCH" "origin/$BRANCH"
    git reset --hard "origin/$BRANCH"
else
    echo " Cloning repo..."
    cd "$BASE_DIR/data"
    rm -rf source
    git clone --branch "$BRANCH" "$REPO_URL" source
    cd source
fi
git submodule update --init || true
echo " HEAD: \$(git log --oneline -1)"
echo " Branch: \$(git branch --show-current)"
PULL_EOF

    # Inject keystores into source tree
    log "Injecting keystores into source tree..."
    ssh_cmd bash <<KS_EOF
set -euo pipefail
mkdir -p "$BASE_DIR/data/source/android/keystore"
if [ -f "$BASE_DIR/data/keystore/wzp-debug.jks" ]; then
    cp "$BASE_DIR/data/keystore/wzp-debug.jks" "$BASE_DIR/data/source/android/keystore/"
    cp "$BASE_DIR/data/keystore/wzp-release.jks" "$BASE_DIR/data/source/android/keystore/"
    echo " Keystores ready (wzp-debug.jks + wzp-release.jks)"
else
    echo " WARNING: No keystores in persistent storage — build will generate temporary ones"
fi
KS_EOF
}

# ---------------------------------------------------------------------------
# --build: Build APK inside Docker container
#   $1 = "1" to also build release APK (default: debug only)
# ---------------------------------------------------------------------------
do_build() {
    local build_release="${1:-0}"

    if [ "$build_release" = "1" ]; then
        log "Building debug + release APKs inside Docker container..."
    else
        log "Building debug APK inside Docker container..."
    fi

    ssh_cmd bash <<BUILD_EOF
set -euo pipefail

# Ensure uid 1000 can write to mounted volumes.
# Only chown files not already 1000:1000; -print0/xargs -0 keeps paths with
# spaces intact, and errors on stubborn files are ignored.
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache" \
    \( ! -user 1000 -o ! -group 1000 \) -print0 2>/dev/null | \
    xargs -0 -r chown 1000:1000 2>/dev/null || true

docker run --rm \
    --user 1000:1000 \
    -e BUILD_RELEASE="$build_release" \
    -v "$BASE_DIR/data/source:/build/source" \
    -v "$BASE_DIR/data/cache/cargo-registry:/home/builder/.cargo/registry" \
    -v "$BASE_DIR/data/cache/cargo-git:/home/builder/.cargo/git" \
    -v "$BASE_DIR/data/cache/target:/build/source/target" \
    -v "$BASE_DIR/data/cache/gradle:/home/builder/.gradle" \
    "$DOCKER_IMAGE" \
    bash -c '
set -euo pipefail
cd /build/source

echo ">>> Building Rust native library (arm64-v8a, release)..."

# Clean stale jniLibs so cargo-ndk re-copies libc++_shared.so
rm -rf android/app/src/main/jniLibs/arm64-v8a

cargo ndk -t arm64-v8a \
    -o android/app/src/main/jniLibs \
    build --release -p wzp-android 2>&1 | tail -10

[ -f android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so ] || {
    echo "ERROR: libwzp_android.so not found after build"; exit 1;
}
echo " .so size: \$(du -h android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so | cut -f1)"

# Verify keystores exist (should have been injected by --pull)
if [ -f android/keystore/wzp-debug.jks ] && [ -f android/keystore/wzp-release.jks ]; then
    echo " Keystores: wzp-debug.jks + wzp-release.jks (from persistent storage)"
else
    echo "WARNING: Keystores missing — generating temporary debug keystore..."
    mkdir -p android/keystore
    keytool -genkey -v \
        -keystore android/keystore/wzp-debug.jks \
        -keyalg RSA -keysize 2048 -validity 10000 \
        -alias wzp-debug -storepass android -keypass android \
        -dname "CN=WZP Debug" 2>&1 | tail -1
    cp android/keystore/wzp-debug.jks android/keystore/wzp-release.jks
fi

cd android
chmod +x ./gradlew

echo ">>> Building debug APK..."
./gradlew assembleDebug --no-daemon --warning-mode=none 2>&1 | tail -5

if [ "\${BUILD_RELEASE}" = "1" ]; then
    echo ">>> Building release APK..."
    ./gradlew assembleRelease --no-daemon --warning-mode=none 2>&1 | tail -5 || \
        echo " (release build failed — debug APK still available)"
fi

echo ""
echo ">>> Build artifacts:"
find . -name "*.apk" -path "*/outputs/apk/*" -exec ls -lh {} \;
'
BUILD_EOF
}

# ---------------------------------------------------------------------------
# --upload: Upload APKs to rustypaste
# ---------------------------------------------------------------------------
do_upload() {
    log "Uploading APKs to rustypaste..."

    UPLOAD_RESULT=$(ssh_cmd bash <<'UPLOAD_EOF'
set -euo pipefail

BASE_DIR="/mnt/storage/manBuilder"
ENV_FILE="$BASE_DIR/.env"

if [ ! -f "$ENV_FILE" ]; then
    echo "ERROR: $ENV_FILE not found — create it with rusty_address and rusty_auth_token" >&2
    exit 1
fi

source "$ENV_FILE"

if [ -z "${rusty_address:-}" ] || [ -z "${rusty_auth_token:-}" ]; then
    echo "ERROR: rusty_address or rusty_auth_token not set in $ENV_FILE" >&2
    exit 1
fi

upload_apk() {
    local apk="$1" label="$2"
    if [ -f "$apk" ]; then
        local url
        url=$(curl -s -F "file=@$apk" -H "Authorization: $rusty_auth_token" "$rusty_address")
        echo "$label: $url"
    fi
}

DEBUG_APK=$(find "$BASE_DIR/data/source/android" -name "app-debug*.apk" -path "*/outputs/apk/*" 2>/dev/null | head -1)
RELEASE_APK=$(find "$BASE_DIR/data/source/android" -name "app-release*.apk" -path "*/outputs/apk/*" 2>/dev/null | head -1)

upload_apk "${DEBUG_APK:-}" "debug"
upload_apk "${RELEASE_APK:-}" "release"
UPLOAD_EOF
    )

    echo "$UPLOAD_RESULT"
}

# ---------------------------------------------------------------------------
# --transfer: SCP APKs back to local machine
# ---------------------------------------------------------------------------
do_transfer() {
    log "Downloading APKs to local machine..."

    mkdir -p "$LOCAL_OUTPUT_DIR"

    # Debug APK
    DEBUG_REMOTE=$(ssh_cmd "find $BASE_DIR/data/source/android -name 'app-debug*.apk' -path '*/outputs/apk/*' 2>/dev/null | head -1" || true)
    if [ -n "$DEBUG_REMOTE" ]; then
        scp $SSH_OPTS "$REMOTE_HOST:$DEBUG_REMOTE" "$LOCAL_OUTPUT_DIR/wzp-debug.apk"
        echo " debug: $LOCAL_OUTPUT_DIR/wzp-debug.apk ($(du -h "$LOCAL_OUTPUT_DIR/wzp-debug.apk" | cut -f1))"
    fi

    # Release APK
    RELEASE_REMOTE=$(ssh_cmd "find $BASE_DIR/data/source/android -name 'app-release*.apk' -path '*/outputs/apk/*' 2>/dev/null | head -1" || true)
    if [ -n "$RELEASE_REMOTE" ]; then
        scp $SSH_OPTS "$REMOTE_HOST:$RELEASE_REMOTE" "$LOCAL_OUTPUT_DIR/wzp-release.apk"
        echo " release: $LOCAL_OUTPUT_DIR/wzp-release.apk ($(du -h "$LOCAL_OUTPUT_DIR/wzp-release.apk" | cut -f1))"
    fi

    # Also grab the .so
    scp $SSH_OPTS "$REMOTE_HOST:$BASE_DIR/data/source/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so" \
        "$LOCAL_OUTPUT_DIR/libwzp_android.so" 2>/dev/null \
        && echo " .so: $LOCAL_OUTPUT_DIR/libwzp_android.so" || true
}

# ---------------------------------------------------------------------------
# Summary banner
# ---------------------------------------------------------------------------
show_summary() {
    log "All done!"
    echo ""
    echo " ┌──────────────────────────────────────────────────────────────┐"
    [ -f "$LOCAL_OUTPUT_DIR/wzp-debug.apk" ] && \
        echo " │ Debug APK: $LOCAL_OUTPUT_DIR/wzp-debug.apk"
    [ -f "$LOCAL_OUTPUT_DIR/wzp-release.apk" ] && \
        echo " │ Release APK: $LOCAL_OUTPUT_DIR/wzp-release.apk"
    echo " │"
    if [ -n "${UPLOAD_RESULT:-}" ]; then
        echo " │ Rustypaste:"
        echo "$UPLOAD_RESULT" | while read -r line; do
            echo " │ $line"
        done
        echo " │"
    fi
    echo " │ Install: adb install -r $LOCAL_OUTPUT_DIR/wzp-debug.apk"
    echo " └──────────────────────────────────────────────────────────────┘"
}

# ---------------------------------------------------------------------------
# Parse arguments
# ---------------------------------------------------------------------------
ACTION=""
BUILD_RELEASE=0

for arg in "$@"; do
    case "$arg" in
        --release) BUILD_RELEASE=1 ;;
        --prepare|--pull|--build|--upload|--transfer|--all)
            if [ -n "$ACTION" ]; then
                err "Multiple actions specified: $ACTION and $arg"
                exit 1
            fi
            ACTION="$arg"
            ;;
        *)
            echo "Usage: $0 [--prepare|--pull|--build|--upload|--transfer|--all] [--release]"
            echo ""
            echo "Actions:"
            echo "  (no action)  Full pipeline: pull → prepare → build → upload → transfer"
            echo "  --prepare    Build Docker image + sync keystores to remote"
            echo "  --pull       Clone/update source from Gitea + inject keystores"
            echo "  --build      Build debug APK inside Docker container"
            echo "  --upload     Upload APKs to rustypaste"
            echo "  --transfer   SCP APKs + .so back to local machine"
            echo "  --all        pull → build → upload → transfer (Docker image ready)"
            echo ""
            echo "Flags:"
            echo "  --release    Also build release APK (default: debug only)"
            echo ""
            echo "Examples:"
            echo "  $0                    # full pipeline, debug only"
            echo "  $0 --release          # full pipeline, debug + release"
            echo "  $0 --build            # debug APK only"
            echo "  $0 --build --release  # debug + release APKs"
            echo "  $0 --all              # iterate: pull+build+upload+transfer (debug)"
            echo "  $0 --all --release    # iterate with release too"
            echo ""
            echo "Environment:"
            echo "  WZP_BRANCH=$BRANCH"
            exit 1
            ;;
    esac
done

# ---------------------------------------------------------------------------
# Dispatch
# ---------------------------------------------------------------------------
case "${ACTION:-}" in
    --prepare)
        do_prepare
        ;;
    --pull)
        do_pull
        ;;
    --build)
        do_build "$BUILD_RELEASE"
        ;;
    --upload)
        do_upload
        ;;
    --transfer)
        do_transfer
        ;;
    --all)
        do_pull
        do_build "$BUILD_RELEASE"
        do_upload
        do_transfer
        show_summary
        ;;
    "")
        do_pull
        do_prepare
        do_build "$BUILD_RELEASE"
        do_upload
        do_transfer
        show_summary
        ;;
esac
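The --upload step sources $BASE_DIR/.env and aborts unless rusty_address and rusty_auth_token are set. A minimal, standalone sketch of that validation pattern (the file contents below are made up for illustration):

```shell
# Sketch of the .env validation used by --upload: source the file, then
# require both variables to be non-empty. Values below are illustrative.
ENV_FILE=$(mktemp)
printf 'rusty_address=https://paste.example\nrusty_auth_token=secret\n' > "$ENV_FILE"

# shellcheck disable=SC1090
source "$ENV_FILE"
if [ -z "${rusty_address:-}" ] || [ -z "${rusty_auth_token:-}" ]; then
    echo "invalid env"
else
    echo "env ok"
fi
rm -f "$ENV_FILE"
```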
scripts/build-android.sh (new executable file, 240 lines)
@@ -0,0 +1,240 @@
#!/usr/bin/env bash
# =============================================================================
# WZ Phone — Android APK build script for Debian 12 (Bookworm)
#
# Sets up a complete build environment from scratch and produces a debug APK.
# Idempotent — safe to run multiple times (skips already-installed components).
#
# Tested on: Debian 12 x86_64, cross-compiling to aarch64-linux-android
#
# Why these specific versions:
#
# cmake 3.25-3.28 (system package from apt)
#   cmake 3.25 (Debian 12) and 3.28 (Ubuntu 24.04) both work.
#   cmake 3.31+ has armv7/aarch64 flag conflicts in Android-Determine.cmake.
#   cmake 4.x drops cmake_minimum_required < 3.5.
#   Do NOT use pip cmake — it bundles its own modules with different bugs.
#   CRITICAL: must set ANDROID_NDK=$ANDROID_NDK_HOME (cmake checks ANDROID_NDK).
#
# NDK 26.1.10909125 (r26b)
#   NDK 27+ ships a newer libc++_shared.so with different scudo allocator
#   defaults. On Android 16 devices with MTE (Memory Tagging Extension)
#   enabled (e.g. Nothing A059), NDK 27's scudo crashes during malloc/calloc.
#   NDK 26.1 is the last stable version for these devices.
#   Matches build.gradle.kts: ndkVersion = "26.1.10909125"
#
# JDK 17 (openjdk-17-jdk-headless)
#   Gradle 8.5 + AGP 8.2.0 officially support JDK 17.
#   JDK 21 works for compilation but has Gradle daemon compat issues.
#
# Rust stable (currently 1.94.1)
#   Edition 2024, MSRV 1.85. Stable channel is fine.
#
# ANDROID_NDK=$ANDROID_NDK_HOME (BOTH must be set)
#   cmake's Android platform module checks ANDROID_NDK (no _HOME suffix).
#   cargo-ndk sets ANDROID_NDK_HOME. Both must point to the same path.
#
# Usage:
#   chmod +x scripts/build-android.sh
#   ./scripts/build-android.sh                      # build from current tree
#   WZP_CLONE=1 ./scripts/build-android.sh          # clone fresh from git
#   WZP_COMMIT=2092245 ./scripts/build-android.sh   # pin to specific commit
#
# Environment variables (all optional):
#   WZP_CLONE     Set to 1 to clone from git instead of using current dir
#   WZP_REPO      Git clone URL (default: ssh://git@git.manko.yoga:222/manawenuz/wz-phone)
#   WZP_BRANCH    Branch to checkout (default: feat/android-voip-client)
#   WZP_COMMIT    Commit to pin to (default: HEAD)
#   WZP_WORKDIR   Build directory (default: /tmp/wzp-build)
#   ANDROID_API   SDK platform level (default: 34)
#   NDK_VERSION   NDK version string (default: 26.1.10909125)
# =============================================================================
set -euo pipefail

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
CLONE="${WZP_CLONE:-0}"
REPO="${WZP_REPO:-ssh://git@git.manko.yoga:222/manawenuz/wz-phone}"
BRANCH="${WZP_BRANCH:-feat/android-voip-client}"
COMMIT="${WZP_COMMIT:-}"
WORKDIR="${WZP_WORKDIR:-/tmp/wzp-build}"
ANDROID_API="${ANDROID_API:-34}"
NDK_VERSION="${NDK_VERSION:-26.1.10909125}"

ANDROID_HOME="${ANDROID_HOME:-$HOME/android-sdk}"
ANDROID_NDK_HOME="$ANDROID_HOME/ndk/$NDK_VERSION"
# cmake checks ANDROID_NDK (not _HOME) — both must be set
ANDROID_NDK="$ANDROID_NDK_HOME"
JAVA_HOME="/usr/lib/jvm/java-17-openjdk-$(dpkg --print-architecture)"
CMDLINE_TOOLS_URL="https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip"

export ANDROID_HOME ANDROID_NDK_HOME ANDROID_NDK JAVA_HOME
export PATH="$JAVA_HOME/bin:$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/platform-tools:$HOME/.cargo/bin:$PATH"

log() { echo -e "\n\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; exit 1; }

# ---------------------------------------------------------------------------
# Step 1: System packages (cmake 3.25, JDK 17, make, git, etc.)
# ---------------------------------------------------------------------------
log "Installing system packages"
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq \
    build-essential \
    cmake \
    curl \
    git \
    libssl-dev \
    pkg-config \
    unzip \
    wget \
    zip \
    openjdk-17-jdk-headless \
    2>/dev/null

# Verify critical versions
log "Verifying build environment"
echo " cmake: $(cmake --version | head -1)"
echo " java: $(java -version 2>&1 | head -1)"
echo " make: $(make --version | head -1)"

CMAKE_MAJOR=$(cmake --version | head -1 | grep -oP '\d+' | head -1)
CMAKE_MINOR=$(cmake --version | head -1 | grep -oP '\d+' | sed -n '2p')
if [ "$CMAKE_MAJOR" -gt 3 ] || { [ "$CMAKE_MAJOR" -eq 3 ] && [ "$CMAKE_MINOR" -gt 30 ]; }; then
    err "cmake $(cmake --version | head -1) is too new! Need cmake <= 3.28.x. cmake 3.31+ has Android cross-compilation bugs."
fi

# ---------------------------------------------------------------------------
# Step 2: Rust toolchain
# ---------------------------------------------------------------------------
log "Setting up Rust toolchain"
if ! command -v rustup &>/dev/null; then
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
    source "$HOME/.cargo/env"
fi
rustup default stable
rustup target add aarch64-linux-android
echo " rustc: $(rustc --version)"
echo " cargo: $(cargo --version)"

if ! command -v cargo-ndk &>/dev/null; then
    log "Installing cargo-ndk"
    cargo install cargo-ndk
fi
echo " ndk: $(cargo ndk --version)"

# ---------------------------------------------------------------------------
# Step 3: Android SDK + NDK 26.1
# ---------------------------------------------------------------------------
log "Setting up Android SDK + NDK $NDK_VERSION"
if [ ! -f "$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager" ]; then
    log "Downloading Android command-line tools"
    mkdir -p "$ANDROID_HOME/cmdline-tools"
    TMPZIP=$(mktemp /tmp/cmdline-tools-XXXXX.zip)
    wget -q -O "$TMPZIP" "$CMDLINE_TOOLS_URL"
    unzip -qo "$TMPZIP" -d "$ANDROID_HOME/cmdline-tools"
    mv "$ANDROID_HOME/cmdline-tools/cmdline-tools" "$ANDROID_HOME/cmdline-tools/latest" 2>/dev/null || true
    rm -f "$TMPZIP"
fi

yes | sdkmanager --licenses >/dev/null 2>&1 || true

if [ ! -d "$ANDROID_NDK_HOME" ]; then
    log "Installing NDK $NDK_VERSION (this takes a few minutes)"
    sdkmanager --install \
        "platforms;android-${ANDROID_API}" \
        "build-tools;${ANDROID_API}.0.0" \
        "ndk;${NDK_VERSION}" \
        "platform-tools" \
        2>&1 | grep -v "^\[" || true
fi

[ -d "$ANDROID_NDK_HOME" ] || err "NDK not found at $ANDROID_NDK_HOME"
echo " NDK: $ANDROID_NDK_HOME"
echo " SDK: $ANDROID_HOME"

# ---------------------------------------------------------------------------
# Step 4: Source code
# ---------------------------------------------------------------------------
if [ "$CLONE" = "1" ]; then
    log "Cloning $REPO (branch: $BRANCH)"
    if [ -d "$WORKDIR/.git" ]; then
        cd "$WORKDIR"
        git fetch origin
    else
        rm -rf "$WORKDIR"
        git clone --branch "$BRANCH" --recurse-submodules "$REPO" "$WORKDIR"
        cd "$WORKDIR"
    fi
    git checkout "$BRANCH"
    git pull origin "$BRANCH" || true
    git submodule update --init --recursive

    if [ -n "$COMMIT" ]; then
        log "Pinning to commit $COMMIT"
        git checkout "$COMMIT"
    fi
else
    # Use current directory (assume we're in the repo root)
    SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
    WORKDIR="$(cd "$SCRIPT_DIR/.." && pwd)"
    cd "$WORKDIR"
    [ -f "Cargo.toml" ] || err "Not in repo root. Run from repo root or set WZP_CLONE=1"
fi

echo "  HEAD: $(git log --oneline -1)"

# ---------------------------------------------------------------------------
# Step 5: Build native Rust library (.so)
# ---------------------------------------------------------------------------
log "Building Rust native library (arm64-v8a, release)"
cargo ndk -t arm64-v8a \
    -o "$WORKDIR/android/app/src/main/jniLibs" \
    build --release -p wzp-android

SO="$WORKDIR/android/app/src/main/jniLibs/arm64-v8a/libwzp_android.so"
[ -f "$SO" ] || err ".so not found at $SO"
echo "  Built: $SO ($(du -h "$SO" | cut -f1))"

# ---------------------------------------------------------------------------
# Step 6: Generate debug keystore (if missing)
# ---------------------------------------------------------------------------
KEYSTORE="$WORKDIR/android/keystore/wzp-debug.jks"
if [ ! -f "$KEYSTORE" ]; then
    log "Generating debug keystore"
    mkdir -p "$(dirname "$KEYSTORE")"
    keytool -genkey -v \
        -keystore "$KEYSTORE" \
        -keyalg RSA -keysize 2048 -validity 10000 \
        -alias wzp-debug \
        -storepass android -keypass android \
        -dname "CN=WZP Debug" 2>&1 | tail -1
fi

# ---------------------------------------------------------------------------
# Step 7: Build Android APK
# ---------------------------------------------------------------------------
log "Building APK (debug)"
cd "$WORKDIR/android"
chmod +x ./gradlew
./gradlew assembleDebug --no-daemon --warning-mode=none

APK=$(find . -name "app-debug*.apk" -path "*/outputs/apk/*" | head -1)
[ -n "$APK" ] || err "APK not found"
APK_ABS="$(cd "$(dirname "$APK")" && pwd)/$(basename "$APK")"

# ---------------------------------------------------------------------------
# Done
# ---------------------------------------------------------------------------
log "Build complete!"
echo ""
echo "  ┌──────────────────────────────────────────────────────────┐"
echo "  │ APK:    $APK_ABS"
echo "  │ Size:   $(du -h "$APK_ABS" | cut -f1)"
echo "  │ SHA256: $(sha256sum "$APK_ABS" | cut -d' ' -f1)"
echo "  └──────────────────────────────────────────────────────────┘"
echo ""
echo "  Install: adb install -r $APK_ABS"
echo ""
161
scripts/build-linux-docker.sh
Executable file
@@ -0,0 +1,161 @@
#!/usr/bin/env bash
set -euo pipefail

# Build WarzonePhone Linux x86_64 binaries via Docker on SepehrHomeserverdk.
# Reuses same Docker image as Android build (has Rust + cmake + build tools).
# Fire and forget — notifies via ntfy.sh/wzp with rustypaste URL.
#
# Usage:
#   ./scripts/build-linux-docker.sh            Build + upload + notify
#   ./scripts/build-linux-docker.sh --pull     Git pull before building
#   ./scripts/build-linux-docker.sh --clean    Clean Rust target cache
#   ./scripts/build-linux-docker.sh --install  Download binaries locally after build

REMOTE_HOST="SepehrHomeserverdk"
BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/linux-x86_64"
SSH_OPTS="-o ConnectTimeout=15 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 -o LogLevel=ERROR"

DO_PULL=0
DO_CLEAN=0
DO_INSTALL=0
for arg in "$@"; do
    case "$arg" in
        --pull) DO_PULL=1 ;;
        --clean) DO_CLEAN=1 ;;
        --install) DO_INSTALL=1 ;;
    esac
done

log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }

ssh_cmd() { ssh $SSH_OPTS "$REMOTE_HOST" "$@"; }

# Upload build script to remote
log "Uploading build script..."
ssh_cmd "cat > /tmp/wzp-linux-build.sh" <<'REMOTE_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail

BASE_DIR="/mnt/storage/manBuilder"
NTFY_TOPIC="https://ntfy.sh/wzp"
DO_PULL="${1:-0}"
DO_CLEAN="${2:-0}"

notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }

trap 'notify "WZP Linux build FAILED! Check /tmp/wzp-linux-build.log"' ERR
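A minimal sketch of the `trap … ERR` pattern used by the remote script: an unguarded failure fires the handler (which in the real script calls `notify`), while a failure inside a `||` list does not. `msg_file` and `fail_step` are made-up names for the demo:

```shell
#!/usr/bin/env bash
# Demo of trap-on-ERR: the handler records a failure message the way
# the remote script sends an ntfy notification. No -e, so we continue.
set -u

msg_file=$(mktemp)
on_error() { echo "build FAILED" > "$msg_file"; }
trap on_error ERR

false || true                      # guarded failure: ERR trap does not fire
guarded=$(cat "$msg_file")

fail_step() { return 1; }
fail_step                          # unguarded failure: ERR trap fires
result=$(cat "$msg_file")
rm -f "$msg_file"
```

Combined with `set -e` (as in the script itself), the same unguarded failure would also terminate the script right after the handler runs.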

if [ "$DO_PULL" = "1" ]; then
    echo ">>> Pulling latest..."
    cd "$BASE_DIR/data/source"
    git checkout -- . 2>/dev/null || true
    git pull origin feat/android-voip-client 2>&1 | tail -3
fi

if [ "$DO_CLEAN" = "1" ]; then
    echo ">>> Cleaning Linux target cache..."
    rm -rf "$BASE_DIR/data/cache-linux/target"
fi

# Ensure cache dirs exist (separate from Android cache)
mkdir -p "$BASE_DIR/data/cache-linux/target" \
    "$BASE_DIR/data/cache-linux/cargo-registry" \
    "$BASE_DIR/data/cache-linux/cargo-git"

# Fix perms
find "$BASE_DIR/data/source" "$BASE_DIR/data/cache-linux" \
    ! -user 1000 -o ! -group 1000 2>/dev/null | \
    xargs -r chown 1000:1000 2>/dev/null || true

notify "WZP Linux x86_64 build started..."

echo ">>> Building in Docker..."
docker run --rm --user 1000:1000 \
    -v "$BASE_DIR/data/source:/build/source" \
    -v "$BASE_DIR/data/cache-linux/cargo-registry:/home/builder/.cargo/registry" \
    -v "$BASE_DIR/data/cache-linux/cargo-git:/home/builder/.cargo/git" \
    -v "$BASE_DIR/data/cache-linux/target:/build/source/target" \
    wzp-android-builder bash -c '
        set -euo pipefail
        cd /build/source

        echo ">>> Building relay + client + web + bench..."
        cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-web --bin wzp-bench 2>&1 | tail -5

        echo ">>> Building audio client..."
        cargo build --release --bin wzp-client --features audio 2>&1 | tail -3
        cp target/release/wzp-client target/release/wzp-client-audio
        cargo build --release --bin wzp-client 2>&1 | tail -3

        echo ">>> Binaries:"
        ls -lh target/release/wzp-relay target/release/wzp-client target/release/wzp-client-audio target/release/wzp-web target/release/wzp-bench

        echo ">>> Packaging..."
        tar czf /tmp/wzp-linux-x86_64.tar.gz \
            -C target/release wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench

        echo "BINARIES_BUILT"
    '
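The packaging step relies on `tar czf … -C dir files` storing bare filenames, which is why the tarball later unpacks flat into `$LOCAL_OUTPUT` with no leading directory. A self-contained sketch with stub files (paths are illustrative):

```shell
#!/usr/bin/env bash
# Demo of `tar czf … -C dir files`: entries are stored without any
# leading directory, so extraction drops them directly into the cwd.
set -euo pipefail

work=$(mktemp -d)
mkdir -p "$work/release"
printf 'stub\n' > "$work/release/wzp-relay"
printf 'stub\n' > "$work/release/wzp-client"

tar czf "$work/bundle.tar.gz" -C "$work/release" wzp-relay wzp-client

# List archive entries: no "release/" prefix appears
listing=$(tar tzf "$work/bundle.tar.gz" | sort | xargs)
rm -rf "$work"
```

Without `-C`, the same command would have to be run from inside `release/` (or would embed the full path in every entry).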

# Upload to rustypaste
echo ">>> Uploading to rustypaste..."
source "$BASE_DIR/.env"
TARBALL="$BASE_DIR/data/cache-linux/target/release/../../../wzp-linux-x86_64.tar.gz"
# Docker wrote to /tmp inside container, copy from target mount
docker run --rm \
    -v "$BASE_DIR/data/cache-linux/target:/build/target" \
    wzp-android-builder bash -c \
    "cp /build/target/release/wzp-relay /build/target/release/wzp-client /build/target/release/wzp-client-audio /build/target/release/wzp-web /build/target/release/wzp-bench /tmp/ && tar czf /tmp/wzp-linux-x86_64.tar.gz -C /tmp wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench && cat /tmp/wzp-linux-x86_64.tar.gz" \
    > /tmp/wzp-linux-x86_64.tar.gz

URL=$(curl -s -F "file=@/tmp/wzp-linux-x86_64.tar.gz" -H "Authorization: $rusty_auth_token" "$rusty_address")
if [ -n "$URL" ]; then
    echo "UPLOAD_URL=$URL"
    notify "WZP Linux x86_64 binaries ready! $URL"
    echo ">>> Done! Binaries at: $URL"
else
    notify "WZP Linux build FAILED - upload error"
    echo "ERROR: upload failed"
    exit 1
fi
REMOTE_SCRIPT

ssh_cmd "chmod +x /tmp/wzp-linux-build.sh"

# Run in tmux
log "Starting Linux build in tmux..."
ssh_cmd "tmux kill-session -t wzp-linux 2>/dev/null; true"
ssh_cmd "tmux new-session -d -s wzp-linux '/tmp/wzp-linux-build.sh $DO_PULL $DO_CLEAN 2>&1 | tee /tmp/wzp-linux-build.log'"

log "Build running! Notification on ntfy.sh/wzp when done."
echo ""
echo "  Monitor: ssh $REMOTE_HOST 'tail -f /tmp/wzp-linux-build.log'"
echo "  Status:  ssh $REMOTE_HOST 'tail -5 /tmp/wzp-linux-build.log'"
echo ""

# Optionally wait and download
if [ "$DO_INSTALL" = "1" ]; then
    log "Waiting for build..."
    while true; do
        sleep 15
        if ssh_cmd "grep -q 'UPLOAD_URL\|ERROR' /tmp/wzp-linux-build.log 2>/dev/null"; then
            break
        fi
    done

    URL=$(ssh_cmd "grep UPLOAD_URL /tmp/wzp-linux-build.log | tail -1 | cut -d= -f2")
    if [ -n "$URL" ]; then
        log "Downloading binaries..."
        mkdir -p "$LOCAL_OUTPUT"
        curl -s -o "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz" "$URL"
        tar xzf "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz" -C "$LOCAL_OUTPUT/"
        rm "$LOCAL_OUTPUT/wzp-linux-x86_64.tar.gz"
        ls -lh "$LOCAL_OUTPUT"/wzp-*
        log "Done! Binaries in $LOCAL_OUTPUT/"
    else
        err "Build failed"
    fi
fi
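The `grep UPLOAD_URL | tail -1 | cut -d= -f2` pipeline used to recover the URL from the build log can be exercised against a fabricated log (contents below are made up for the demo):

```shell
#!/usr/bin/env bash
# Demo of the log-parsing pipeline: the last UPLOAD_URL= line wins,
# and cut -d= -f2 keeps the value after the first '='.
set -euo pipefail

log_file=$(mktemp)
cat > "$log_file" <<'EOF'
>>> Building in Docker...
UPLOAD_URL=https://paste.example/old
>>> retrying upload
UPLOAD_URL=https://paste.example/abc123
EOF

url=$(grep UPLOAD_URL "$log_file" | tail -1 | cut -d= -f2)
rm -f "$log_file"
```

One caveat: `-f2` truncates any value that itself contains `=`; `cut -d= -f2-` would keep the whole remainder of the line.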
122
scripts/build-linux-notify.sh
Executable file
@@ -0,0 +1,122 @@
#!/usr/bin/env bash
set -euo pipefail

# Build WarzonePhone Linux x86_64 binaries via Hetzner Cloud VPS.
# Fire and forget — notifies via ntfy.sh/wzp with rustypaste URL.
#
# Usage:
#   ./scripts/build-linux-notify.sh          Full: create VM → build → upload → notify → destroy
#   ./scripts/build-linux-notify.sh --keep   Keep VM after build
#   ./scripts/build-linux-notify.sh --pull   Git pull (for existing VM)

SSH_KEY_NAME="wz"
SSH_KEY_PATH="/Users/manwe/CascadeProjects/wzp"
SERVER_TYPE="cx33"
IMAGE="debian-12"
SERVER_NAME="wzp-linux-builder"
NTFY_TOPIC="https://ntfy.sh/wzp"
LOCAL_OUTPUT="target/linux-x86_64"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"

SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=15 -o ServerAliveInterval=15 -o LogLevel=ERROR"

KEEP_VM=0
DO_PULL=0
for arg in "$@"; do
    case "$arg" in
        --keep) KEEP_VM=1 ;;
        --pull) DO_PULL=1 ;;
    esac
done

log() { echo -e "\033[1;36m>>> $*\033[0m"; }
err() { echo -e "\033[1;31mERROR: $*\033[0m" >&2; }

get_vm_ip() {
    hcloud server list -o columns=name,ipv4 -o noheader 2>/dev/null | grep "$SERVER_NAME" | awk '{print $2}' | tr -d ' '
}

ssh_cmd() {
    local ip=$(get_vm_ip)
    [ -n "$ip" ] || { err "No VM found"; exit 1; }
    ssh $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip" "$@"
}

notify() { curl -s -d "$1" "$NTFY_TOPIC" > /dev/null 2>&1 || true; }

# --- Create VM if needed ---
existing=$(hcloud server list -o columns=name -o noheader 2>/dev/null | grep "$SERVER_NAME" | tr -d ' ' || true)
if [ -z "$existing" ]; then
    log "Creating Hetzner VM ($SERVER_TYPE, $IMAGE)..."
    hcloud server create --name "$SERVER_NAME" --type "$SERVER_TYPE" --image "$IMAGE" --ssh-key "$SSH_KEY_NAME" --location fsn1 --quiet

    log "Waiting for SSH..."
    ip=$(get_vm_ip)
    for i in $(seq 1 30); do
        ssh $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip" "echo ok" &>/dev/null && break
        sleep 2
    done

    log "Installing deps..."
    ssh_cmd "apt-get update -qq && apt-get install -y -qq build-essential cmake pkg-config libasound2-dev libssl-dev curl git > /dev/null 2>&1"
    ssh_cmd "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable > /dev/null 2>&1"
fi

# --- Upload source ---
log "Uploading source..."
ip=$(get_vm_ip)
rsync -az --delete \
    --exclude='target' --exclude='.git' --exclude='.claude' \
    --exclude='node_modules' --exclude='dist' --exclude='android/app/build' \
    -e "ssh $SSH_OPTS -i $SSH_KEY_PATH" \
    "$PROJECT_DIR/" "root@$ip:/root/wzp-build/"

# --- Build ---
log "Building all binaries..."
notify "WZP Linux build started..."

ssh_cmd "source ~/.cargo/env && cd /root/wzp-build && \
    cargo build --release --bin wzp-relay --bin wzp-client --bin wzp-web --bin wzp-bench 2>&1 | tail -5 && \
    echo '--- audio client ---' && \
    cargo build --release --bin wzp-client --features audio 2>&1 | tail -3 && \
    cp target/release/wzp-client target/release/wzp-client-audio && \
    cargo build --release --bin wzp-client 2>&1 | tail -3 && \
    echo 'BUILD_DONE' && \
    ls -lh target/release/wzp-relay target/release/wzp-client target/release/wzp-client-audio target/release/wzp-web target/release/wzp-bench"

# --- Package + upload to rustypaste ---
log "Packaging and uploading..."
UPLOAD_URL=$(ssh_cmd "cd /root/wzp-build && \
    tar czf /tmp/wzp-linux-x86_64.tar.gz \
        -C target/release wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench \
        -C /root/wzp-build/crates/wzp-web/static index.html audio-processor.js 2>/dev/null && \
    curl -s -F 'file=@/tmp/wzp-linux-x86_64.tar.gz' \
        -H 'Authorization: DAxAAGghkn1WKv1+RpPKkg==' \
        https://paste.dk.manko.yoga")

if [ -n "$UPLOAD_URL" ]; then
    notify "WZP Linux binaries ready! $UPLOAD_URL"
    log "Uploaded: $UPLOAD_URL"
else
    notify "WZP Linux build FAILED"
    err "Upload failed"
fi

# --- Transfer locally ---
log "Downloading binaries..."
mkdir -p "$LOCAL_OUTPUT"
for bin in wzp-relay wzp-client wzp-client-audio wzp-web wzp-bench; do
    scp $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip:/root/wzp-build/target/release/$bin" "$LOCAL_OUTPUT/$bin" 2>/dev/null
done
ls -lh "$LOCAL_OUTPUT"/wzp-*

# --- Cleanup ---
if [ "$KEEP_VM" = "1" ]; then
    log "VM kept alive. Destroy: hcloud server delete $SERVER_NAME"
else
    log "Destroying VM..."
    hcloud server delete "$SERVER_NAME"
fi

log "Done!"
echo "  Deploy: scp $LOCAL_OUTPUT/wzp-relay user@server:~/wzp/"
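The "Waiting for SSH..." loop above (seq, try, sleep, break) generalizes to a small retry helper; `wait_for` is a hypothetical name for this sketch, not something the script defines:

```shell
#!/usr/bin/env bash
# Hypothetical retry helper mirroring the SSH-wait loop: run a command
# up to N times with a fixed delay, returning success on the first pass.
wait_for() {
    local tries="$1" delay="$2"
    shift 2
    local i
    for i in $(seq 1 "$tries"); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}
```

The inline loop would then read `wait_for 30 2 ssh $SSH_OPTS -i "$SSH_KEY_PATH" "root@$ip" "echo ok"`, with the added benefit that a never-ready VM produces a nonzero exit status instead of silently falling through.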
10
skills-lock.json
Normal file
@@ -0,0 +1,10 @@
{
  "version": 1,
  "skills": {
    "caveman": {
      "source": "JuliusBrussee/caveman",
      "sourceType": "github",
      "computedHash": "aa7939fc4d1fe31484090290da77f2d21e026aa4b34b329d00e6630feb985d75"
    }
  }
}