62 Commits

Author SHA1 Message Date
Siavash Sameni
27c69d8982 Fix unused variable warning in test
Some checks failed
Build & Release / release (push) Has been cancelled
CI / test (push) Successful in 2m35s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-02 08:40:40 +04:00
Siavash Sameni
2cb8519c95 Suppress non_snake_case warning for Win32 FILETIME struct
Some checks failed
CI / test (push) Failing after 1m40s
Build & Release / release (push) Successful in 4m46s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-02 08:34:21 +04:00
Siavash Sameni
9ca124cb76 Fix CPU reporting: Android support, TCP remote CPU parsing
All checks were successful
CI / test (push) Successful in 2m33s
Build & Release / release (push) Successful in 5m11s
- Add target_os = "android" to CPU sampler (reads /proc/stat like Linux)
- Parse remote CPU from interleaved TCP status messages in BOTH mode
- Add dedicated status reader for TX-only mode (reads server's 12-byte
  status messages to get remote CPU and enable speed adaptation)
- Add 3 CPU integration tests: local CPU, TCP BOTH remote, TCP TX-only

Fixes: Android always showing cpu: 0%/0%, TCP remote CPU always 0%
on all platforms (btest-to-btest and btest-to-MikroTik).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-02 08:28:45 +04:00
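[Editor's note] The /proc/stat sampling this commit describes for Linux/Android can be sketched as below. This is a minimal illustration, not the project's actual cpu.rs: parse the aggregate "cpu" line twice and derive a busy percentage from the two snapshots (field layout per proc(5)).

```rust
/// Parse the aggregate "cpu" line of /proc/stat into (idle, total) jiffies.
fn parse_cpu_line(line: &str) -> Option<(u64, u64)> {
    let mut fields = line.split_whitespace();
    if fields.next()? != "cpu" {
        return None; // per-core lines are "cpu0", "cpu1", ...; skip those
    }
    let vals: Vec<u64> = fields.filter_map(|f| f.parse().ok()).collect();
    if vals.len() < 4 {
        return None;
    }
    // Treat idle + iowait as "not busy" (fields 4 and 5 in proc(5)).
    let idle = vals[3] + vals.get(4).copied().unwrap_or(0);
    let total: u64 = vals.iter().sum();
    Some((idle, total))
}

/// Busy percentage between two (idle, total) snapshots, rounded, capped at 100.
fn cpu_percent(prev: (u64, u64), cur: (u64, u64)) -> u8 {
    let dt = cur.1.saturating_sub(prev.1);
    if dt == 0 {
        return 0;
    }
    let didle = cur.0.saturating_sub(prev.0);
    let busy = dt.saturating_sub(didle);
    ((busy * 100 + dt / 2) / dt).min(100) as u8
}
```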
Siavash Sameni
c06a4d0c9a Add public server links to README, fix dead_code warnings
All checks were successful
CI / test (push) Successful in 2m12s
- Add Free Public Servers section with US/EU endpoints and usage examples
- Add Server Pro section documenting the optional pro build
- Add Android/Termux to supported platforms and installation guide
- Gate pro-only public functions with #[cfg(feature = "pro")] to eliminate
  6 dead_code warnings in the standard build

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 19:57:18 +04:00
Siavash Sameni
817535a0ad Add Android aarch64/armv7 targets to release builds
Some checks failed
CI / test (push) Failing after 1m31s
Build & Release / release (push) Successful in 4m49s
Adds ARMv8 (aarch64-linux-android) and ARMv7 (armv7-linux-androideabi)
builds for Termux/Android using the Android NDK r27c. Release artifacts
now include btest-android-aarch64.tar.gz and btest-android-armv7.tar.gz.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 19:24:35 +04:00
Siavash Sameni
ba02ed36b5 Merge feature/server-pro into main
Adds btest-server-pro: multi-user bandwidth test server with SQLite DB,
per-IP quotas (daily/weekly/monthly), inline byte budget enforcement,
TCP multi-connection support, MD5 auth, web dashboard with Chart.js
graphs, quota progress bars, and JSON export.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 18:44:16 +04:00
Siavash Sameni
4cdcc4e6c4 Public btest server: byte budget, multi-conn, web dashboard, quotas
- Inline byte budget in BandwidthState prevents quota overshoot at any
  link speed (TX/RX loops check per-packet, not per-interval)
- TCP multi-connection support for server-pro (session tokens, secondary
  connection joins, delegates to standard multi-conn handler)
- MD5 password verification against stored raw passwords in user DB
- Web dashboard: quota progress bars (daily/weekly/monthly), JSON export
  endpoint (/api/ip/{ip}/export), quota API (/api/ip/{ip}/quota)
- Landing page with usage instructions, UDP NAT warning, credentials
- Fix IP usage double-counting bug in QuotaManager::record_usage
- UserDb now stores DB path and raw passwords for MD5 auth
- 10 enforcer tests (4 new: budget calc, budget stop, budget exhausted,
  unlimited passthrough)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 18:43:09 +04:00
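[Editor's note] The per-packet inline byte budget described above can be sketched roughly as follows. The type and method names here are hypothetical, not the project's BandwidthState API; the point is the per-packet check and the unlimited passthrough case.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Hypothetical per-packet budget gate: TX/RX loops call try_consume before
/// each packet; once the remaining byte budget is exhausted it returns false,
/// so overshoot is bounded by one packet at any link speed.
struct ByteBudget {
    used: AtomicU64,
    limit: u64, // 0 = unlimited passthrough
}

impl ByteBudget {
    fn new(limit: u64) -> Self {
        Self { used: AtomicU64::new(0), limit }
    }

    fn try_consume(&self, packet_bytes: u64) -> bool {
        if self.limit == 0 {
            return true; // unlimited: never blocks
        }
        // fetch_add returns the value *before* the add
        let prev = self.used.fetch_add(packet_bytes, Ordering::Relaxed);
        prev + packet_bytes <= self.limit
    }
}
```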
Siavash Sameni
7dd4820d2c Add all directional IP quota CLI flags
New flags: --ip-weekly-in, --ip-weekly-out, --ip-monthly-in, --ip-monthly-out
Each defaults to the combined flag value (--ip-weekly, --ip-monthly) if not set.
Specific overrides combined: --ip-daily-in 1G --ip-daily 5G → inbound=1G, outbound=5G

Example:
  btest-server-pro --users-db btest.db \
    --ip-daily 10G \
    --ip-daily-in 3G \
    --ip-daily-out 7G \
    --ip-monthly 100G \
    --ip-monthly-in 30G \
    --ip-monthly-out 70G

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 16:40:39 +04:00
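[Editor's note] The defaulting rule above ("specific overrides combined") reduces to a one-line fallback per direction. A sketch, with hypothetical names mirroring the CLI flags:

```rust
/// Resolve one quota period: each directional flag (--ip-daily-in/-out etc.)
/// falls back to the combined flag (--ip-daily etc.) when unset.
fn resolve_directional(
    in_flag: Option<u64>,
    out_flag: Option<u64>,
    combined: Option<u64>,
) -> (Option<u64>, Option<u64>) {
    (in_flag.or(combined), out_flag.or(combined))
}
```

With the commit's own example (`--ip-daily-in 1G --ip-daily 5G`), this yields inbound=1G and outbound=5G.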
Siavash Sameni
2087e5a75f Public server: separate in/out IP quotas, web dashboard scaffold, test intervals
3 agents worked in parallel:

1. DB schema (user_db.rs):
   - ip_usage: inbound_bytes/outbound_bytes columns (renamed from tx/rx)
   - test_intervals table for per-second graphing data
   - Directional methods: get_ip_daily_inbound/outbound, record_ip_inbound/outbound
   - Query methods: get_session_intervals, get_ip_sessions, get_ip_stats
   - New structs: IntervalData, SessionSummary, IpStats

2. Quota (quota.rs):
   - Direction enum (Inbound/Outbound/Both)
   - 6 new directional IP limits (daily/weekly/monthly × in/out)
   - check_ip() now takes direction parameter
   - record_usage() takes (inbound_bytes, outbound_bytes)

3. Web dashboard (web/):
   - Stub router with axum (will be expanded)
   - Templates: index.html + dashboard.html with Chart.js
   - Dependencies: axum, tower-http, serde, serde_json, askama (optional, pro feature)

CLI additions:
  --ip-daily-in, --ip-daily-out, --web-port, --shared-password

64 tests, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 16:30:18 +04:00
Siavash Sameni
9e3cd6d6d4 Wire data transfer into pro server — full quota enforcement working
The pro server now runs actual bandwidth tests with concurrent quota
enforcement. Data flows through the standard btest TCP/UDP handlers
while the QuotaEnforcer monitors usage every N seconds.

Public API added to btest_rs::server:
- run_tcp_test(stream, cmd, state) — TCP test with external state
- run_udp_test(stream, peer, cmd, state, port) — UDP with external state

These allow the pro server to share BandwidthState between the test
handlers and the enforcer, enabling mid-session quota termination.

Verified end-to-end:
- Test 1: TCP download at 70 Gbps, ran full duration
- Test 2: TCP upload, KILLED mid-session by enforcer after 3 checks
  (user_daily_quota_exceeded at 23.8 GB vs 50 MB limit)
- Test 3: REJECTED at connection time (quota already used up)

64 tests, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 15:59:48 +04:00
Siavash Sameni
4403eae4b9 Wire quota enforcement into pro server loop
New server_loop.rs:
- Custom accept loop with pre-connection IP quota check
- DB-based MD5 authentication (verifies user exists + enabled)
- Pre-test user quota check (reject if already exceeded)
- Session tracking in DB (start_session/end_session)
- QuotaEnforcer spawned alongside each test
- Post-test usage recording to both user + IP tables
- Syslog events for auth, quota rejection, test start/end

Full flow:
  1. Accept connection → check IP quota → reject if exceeded
  2. Handshake + auth → verify user in DB → reject if disabled/not found
  3. Check user quota → reject if daily/weekly/monthly exceeded
  4. Start session → spawn enforcer (checks every N seconds)
  5. Run test → enforcer stops it if quota hit or max_duration reached
  6. Record usage → persist to DB → disconnect IP tracker

TODO: Wire actual TX/RX data loops (currently only enforcer runs,
data transfer not yet delegated from pro server to standard handlers)

64 tests, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 15:45:45 +04:00
Siavash Sameni
c08bcffaff Add mid-session quota enforcement with 6 tests
New enforcer.rs module runs alongside active tests:
- Periodic quota checks (default every 10s, configurable --quota-check-interval)
- Max duration enforcement — forcefully stops test after limit
- User quotas: daily/weekly/monthly checked against DB + current session
- IP quotas: daily/weekly/monthly checked against DB + current session
- Flush session bytes to DB for accurate cross-session tracking
- Sets state.running=false to gracefully terminate on quota breach

StopReason enum tracks why a test was stopped:
  MaxDuration, UserDailyQuota, UserWeeklyQuota, UserMonthlyQuota,
  IpDailyQuota, IpWeeklyQuota, IpMonthlyQuota, ClientDisconnected

Tests (6 new, all passing):
- test_enforcer_max_duration: stops after max_duration seconds
- test_enforcer_client_disconnect: detects normal client exit
- test_enforcer_user_daily_quota_exceeded: stops when user quota hit
- test_enforcer_ip_daily_quota_exceeded: stops when IP quota hit
- test_enforcer_under_quota_runs_normally: doesn't stop if under limits
- test_enforcer_flush_records_usage: verifies DB persistence

64 total tests (58 standard + 6 enforcer), all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 15:20:26 +04:00
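[Editor's note] The enforcer's periodic decision can be sketched as a pure check function over the current session state. The StopReason variants come from the commit above; the check signature and limit semantics here are assumptions for illustration.

```rust
/// Why a running test was stopped (subset of the variants listed above).
#[derive(Debug, PartialEq)]
enum StopReason {
    MaxDuration,
    UserDailyQuota,
    ClientDisconnected,
    // remaining weekly/monthly and IP variants elided
}

/// One enforcer tick: returns Some(reason) if the test must stop.
/// A limit of 0 is treated as "unlimited" (an assumption of this sketch).
fn check(
    elapsed_s: u64,
    max_duration_s: u64,
    used_bytes: u64,
    daily_limit: u64,
    client_alive: bool,
) -> Option<StopReason> {
    if !client_alive {
        return Some(StopReason::ClientDisconnected);
    }
    if max_duration_s > 0 && elapsed_s >= max_duration_s {
        return Some(StopReason::MaxDuration);
    }
    if daily_limit > 0 && used_bytes >= daily_limit {
        return Some(StopReason::UserDailyQuota);
    }
    None
}
```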
Siavash Sameni
d61fdb1b94 Add monthly quotas, per-IP limits, user management CLI
Quota system now supports:
- Per-user: daily, weekly, monthly limits
- Per-IP: daily, weekly, monthly limits (abuse prevention)
- Per-IP connection limit
- Max test duration

New CLI flags:
  --monthly-quota, --ip-daily, --ip-weekly, --ip-monthly

User management subcommands:
  btest-server-pro useradd <user> <pass>
  btest-server-pro userdel <user>
  btest-server-pro userlist
  btest-server-pro userset <user> --enabled true/false --daily N --weekly N

New DB tables: ip_usage (per-IP daily tracking)
New methods: get_monthly_usage, get_ip_*_usage, start/end_session,
  delete_user, set_user_enabled, set_user_quota

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 14:58:19 +04:00
Siavash Sameni
89391e1781 Add OpenWrt ipk packaging + split client/server binaries
Some checks failed
CI / test (push) Failing after 1m27s
OpenWrt package (deploy/openwrt/):
- build-ipk.sh: creates .ipk from pre-built binary (no SDK needed)
- Makefile: for OpenWrt SDK integration
- ProCD init script with UCI config
- procd init script with UCI config
- Supports all architectures (x86_64, aarch64, mipsel, mips)

Split binaries for embedded (src/bin/):
- btest-client: client-only, no server/syslog/csv
- btest-server: server-only, no client
- release-small profile: opt-level=z + panic=abort

Sizes (compressed .tar.gz):
  Full btest:    ~1 MB
  btest-client:  ~500 KB (release-small)
  btest-server:  ~550 KB (release-small)

Install on OpenWrt:
  opkg install btest-rs_0.6.0-1_x86_64.ipk

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 14:44:57 +04:00
Siavash Sameni
d2fdc9c6ae Scaffold btest-server-pro: multi-user, quotas, LDAP
New binary `btest-server-pro` (build with --features pro):
  cargo build --release --features pro --bin btest-server-pro

Modules:
- server_pro/user_db.rs: SQLite user database with usage tracking
  - Users table (username, password_hash, quotas, enabled)
  - Usage table (daily bytes per user)
  - Sessions table (per-connection tracking)
- server_pro/quota.rs: bandwidth quota enforcement
  - Per-user daily/weekly limits
  - Per-IP connection limits
  - Max test duration
- server_pro/ldap_auth.rs: LDAP/AD authentication via ldap3
  - Simple bind authentication
  - Service account search for user DN

CLI flags: --users-db, --ldap-url, --ldap-base-dn, --ldap-bind-dn,
  --ldap-bind-pass, --daily-quota, --weekly-quota, --max-conn-per-ip,
  --max-duration

Binary sizes: btest=1.8MB, btest-server-pro=3.4MB (SQLite bundled)
Standard btest binary unchanged, 58 tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 14:33:36 +04:00
Siavash Sameni
8c853c3605 Parallel agent work: bandwidth fix, CPU platforms, packaging
All checks were successful
CI / test (push) Successful in 2m8s
5 agents ran in parallel:

1. Fix bandwidth limit (-b): new advance_next_send() prevents drift
   bursts by resetting when >2x interval behind (bandwidth.rs, client.rs, server.rs)

2. Windows + FreeBSD CPU support (cpu.rs):
   - Windows: GetSystemTimes via raw FFI
   - FreeBSD: sysctl kern.cp_time parsing

3. Ubuntu .deb packaging (deploy/deb/):
   - build-deb.sh: creates .deb from pre-built binary
   - test-deb.sh: tests in Ubuntu Docker container

4. Fedora/RHEL RPM packaging (deploy/rpm/):
   - btest-rs.spec: full RPM spec with systemd unit
   - build-rpm.sh + test-rpm.sh

5. Alpine Linux apk packaging (deploy/alpine/):
   - APKBUILD with OpenRC init script
   - test-alpine.sh

58 tests pass, zero warnings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 14:04:00 +04:00
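[Editor's note] The drift fix in item 1 above, sketched as a free function (the real advance_next_send lives in bandwidth.rs; this signature is an assumption): advance the next-send deadline by one interval, but if the sender has fallen more than 2x behind, snap to now instead of bursting to drain the backlog.

```rust
use std::time::{Duration, Instant};

/// Drift-safe pacing for the -b bandwidth limit: normally step the deadline
/// forward by one interval; after a stall (> 2x interval behind), reset to
/// `now` so the catch-up doesn't arrive as a burst.
fn advance_next_send(next: Instant, interval: Duration, now: Instant) -> Instant {
    if now > next + 2 * interval {
        now // too far behind: reset rather than replaying missed slots
    } else {
        next + interval
    }
}
```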
Siavash Sameni
fe28c04c19 Simplify AUR test: use yay like a real user
All checks were successful
CI / test (push) Successful in 2m7s
Instead of manually setting up rust + makepkg, install yay first
then `yay -S btest-rs --noconfirm` — exactly how a user would.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 13:51:02 +04:00
Siavash Sameni
66be99bef0 Add remote AUR test script
All checks were successful
CI / test (push) Successful in 2m9s
scripts/test-aur-remote.sh: SSHes to a remote x86_64 server, spins up
an Arch Docker container, installs btest-rs from AUR, runs TCP + UDP
loopback tests, and cleans up.

Usage: ./scripts/test-aur-remote.sh root@myserver

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 13:48:16 +04:00
Siavash Sameni
94b122ac25 Add AUR package (PKGBUILD) with systemd service and test script
All checks were successful
CI / test (push) Successful in 2m11s
- deploy/aur/PKGBUILD: builds from source, installs binary + man page + systemd unit
- deploy/aur/.SRCINFO: AUR metadata
- deploy/aur/test-aur.sh: tests PKGBUILD in Docker Arch container
- Supports x86_64, aarch64, armv7h architectures

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 13:33:55 +04:00
Siavash Sameni
a07158ed22 Fix sync-github-release: merge all checksums into one file
All checks were successful
CI / test (push) Successful in 2m9s
Merges separate .sha256 files (from macOS build) into the main
checksums-sha256.txt, adds missing checksums, deduplicates.
Added macOS to release notes table.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 11:44:18 +04:00
Siavash Sameni
1cd552d2dc Update Docker references to GHCR as primary registry
All checks were successful
CI / test (push) Successful in 2m9s
- docker-compose.yml: ghcr.io/manawenuz/btest-rs
- docs/docker.md: GHCR for pull/run examples, both registries documented
- README: GitHub + Gitea issue tracker links
- Version refs updated to 0.6.0

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 11:40:28 +04:00
Siavash Sameni
3af40cb275 Add RPi install docs, GHCR support, push-docker-all script
All checks were successful
CI / test (push) Successful in 2m10s
- README: Raspberry Pi install section with auto-detect architecture
- README: pre-built binary download section for all platforms
- Docker docs: dual registry (Gitea + GHCR)
- scripts/push-docker-all.sh: push to both registries in one command

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 11:29:48 +04:00
Siavash Sameni
f0a48092ed v0.6.0: CPU monitoring, CSV with CPU, docs update, cleanup
Some checks failed
CI / test (push) Failing after 1m27s
Build & Release / release (push) Successful in 3m17s
New in v0.6.0:
- CPU usage: local/remote shown per interval (cpu: 12%/33%)
- Warning indicator (!) when CPU > 70% on either side
- MikroTik CPU encoding: 0x80 | percentage in status byte 1
- CSV includes local_cpu_pct and remote_cpu_pct columns
- Status message format corrected to match MikroTik wire format:
  [type:1][cpu:1][00:2][seq:4 LE][bytes:4 LE]
- Removed btest-opensource submodule (fully reimplemented)
- Deleted research/ecsrp5 branch
- Updated all docs: architecture, user-guide, man page, protocol
- Version bumped to 0.6.0

58 tests, all passing. Zero warnings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 11:16:25 +04:00
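[Editor's note] The status wire format quoted above — [type:1][cpu:1][00:2][seq:4 LE][bytes:4 LE] with CPU encoded as 0x80 | percentage — packs and unpacks as below. A byte-layout sketch only, not validated against a real device.

```rust
/// Pack a 12-byte status message per the layout described in the commit.
fn pack_status(msg_type: u8, cpu_pct: u8, seq: u32, bytes: u32) -> [u8; 12] {
    let mut buf = [0u8; 12];
    buf[0] = msg_type;
    buf[1] = 0x80 | cpu_pct.min(100); // high bit flags "CPU present"
    // bytes 2..4 stay zero
    buf[4..8].copy_from_slice(&seq.to_le_bytes());
    buf[8..12].copy_from_slice(&bytes.to_le_bytes());
    buf
}

/// Recover the CPU percentage: mask off the 0x80 flag, cap at 100.
fn unpack_cpu(buf: &[u8; 12]) -> u8 {
    (buf[1] & 0x7F).min(100)
}
```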
Siavash Sameni
27354108fc Fix CPU reporting: MikroTik uses 0x80|pct encoding, add CPU to CSV
All checks were successful
CI / test (push) Successful in 2m9s
- MikroTik encodes CPU as 0x80 | percentage (high bit flag)
- Deserialize: mask with 0x7F and cap at 100
- Serialize: set high bit (0x80 | cpu) to match MikroTik format
- CSV now includes local_cpu_pct and remote_cpu_pct columns
- Both client and server write CPU to CSV

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 11:08:11 +04:00
Siavash Sameni
24f634170d Add CPU usage monitoring, remove btest-opensource submodule
All checks were successful
CI / test (push) Successful in 2m16s
CPU usage feature:
- New cpu.rs module: background sampler thread, cross-platform (macOS + Linux)
- Status message byte 1 now carries CPU load (0-100%), matching MikroTik format
- Status format corrected: [type][cpu][00][00][seq:4 LE][bytes:4 LE]
- Client and server exchange CPU in every status message
- Display format: "cpu: 40%/12%" (local/remote), "!" warning if > 70%
- Both client and server show local + remote CPU per interval
- Syslog TEST_END could include CPU averages (future enhancement)

Removed btest-opensource submodule — we've fully reimplemented the protocol
with EC-SRP5 auth, multi-connection, IPv6, syslog, CSV, and CPU monitoring.
The original project is still credited in LICENSE and README.

58 tests, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 10:53:00 +04:00
Siavash Sameni
10dd0c3835 Add KNOWN_ISSUES.md, update architecture docs
Some checks failed
CI / test (push) Failing after 1m32s
KNOWN_ISSUES.md documents:
- IPv6 UDP on macOS (ENOBUFS, server mode)
- macOS UDP send buffer saturation (first 2-3 seconds)
- Windows binaries untested
- IPv6 UDP on Linux untested
- EC-SRP5 occasional auth failure
- MikroTik speed adaptation staircase
- TCP multi-connection bandwidth reporting
- Bandwidth limit (-b) not fully effective
- Platform test matrix

Architecture docs updated with:
- Shared BandwidthState for timeout survival
- IPv6 socket handling details
- Complete file layout including tests, deploy, proto-test

54 tests, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 10:39:18 +04:00
Siavash Sameni
5bb224cb3b Remove test CSV from repo, add to gitignore
Some checks failed
CI / test (push) Failing after 1m31s
2026-04-01 10:30:56 +04:00
Siavash Sameni
68eb0c7f96 Add comprehensive integration tests (20 new tests, 54 total)
Some checks failed
CI / test (push) Has been cancelled
New test suite covers:
- TCP IPv4: send, receive, both
- UDP IPv4: send, receive, both
- TCP IPv6: send, receive, both
- UDP IPv6: send, receive, both
- MD5 authentication flow
- EC-SRP5 authentication flow
- EC-SRP5 wrong password rejection
- CSV file creation (client + server)
- Syslog event emission (AUTH_SUCCESS, TEST_START, TEST_END)
- BandwidthState record_interval and running flag

Each test starts a server, runs a client for 2 seconds, verifies
bytes transferred > 0, then cleans up.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 10:30:37 +04:00
Siavash Sameni
23db39a84e Add CSV output to server mode
All checks were successful
CI / test (push) Successful in 1m26s
Server now writes a CSV row for each completed test with peer IP,
protocol, direction, duration, avg speeds, bytes, and lost packets.

Verified on loopback: TX 35 Gbps, RX 51 Gbps captured in CSV.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 10:19:15 +04:00
Siavash Sameni
d19ad25a3c Fix client stats: shared BandwidthState survives timeout cancellation
All checks were successful
CI / test (push) Successful in 1m24s
The state is now created in main.rs and passed into run_client, so
when --duration timeout cancels the future, the stats are still
accessible via shared_state.summary(). CSV and syslog now show
real speeds and byte counts.

Verified: TCP loopback shows 32 Gbps in CSV output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 10:05:50 +04:00
Siavash Sameni
5b07a079fe Fix client CSV/syslog: return actual stats from run_client
All checks were successful
CI / test (push) Successful in 1m23s
run_client and sub-functions now return (tx_bytes, rx_bytes, lost, intervals).
BandwidthState::record_interval() called in both TCP and UDP client status
loops. CSV and syslog TEST_END now show real speeds and byte counts.

Also raised client UDP TX error threshold from 1000 to 50000 with
adaptive backoff matching the server.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 09:52:46 +04:00
Siavash Sameni
949c4908ad Add client syslog events, fix client UDP TX error threshold
All checks were successful
CI / test (push) Successful in 1m26s
- Client mode now emits TEST_START and TEST_END syslog events
- Client UDP TX threshold raised from 1000 to 50000 with adaptive backoff
  (matching server behavior) — prevents premature TX death on macOS
- Updated all docs (README, user-guide, architecture, protocol, docker)
- Added results.csv to gitignore

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 09:40:52 +04:00
Siavash Sameni
751a9d5f13 Add --duration, --csv, --quiet flags for automated testing
All checks were successful
CI / test (push) Successful in 1m27s
- --duration N: run client test for N seconds then exit
- --csv <file>: append results to CSV (creates with headers if new)
- --quiet/-q: suppress terminal output (for scripted/machine use)

CSV columns: timestamp, host, port, protocol, direction, duration_s,
  tx_avg_mbps, rx_avg_mbps, tx_bytes, rx_bytes, lost_packets, auth_type

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 09:30:58 +04:00
Siavash Sameni
ce01d514b2 Add speed/bytes/duration to syslog TEST_END events
All checks were successful
CI / test (push) Successful in 1m24s
TEST_END now includes: duration, avg TX/RX Mbps, total bytes, lost packets.
All test functions track cumulative totals via BandwidthState::record_interval()
and return summary stats.

Example:
  TEST_END peer=172.16.81.1:59070 proto=UDP dir=TX duration=6s
    tx_avg=275.00Mbps rx_avg=0.00Mbps tx_bytes=206250000 rx_bytes=0 lost=0

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 09:23:11 +04:00
Siavash Sameni
7bc54a977c Fix syslog-ng filter: match on MESSAGE not program()
All checks were successful
CI / test (push) Successful in 1m29s
With flags(no-parse) on the source, syslog-ng doesn't extract
the program name. Use match("btest-rs:" value("MESSAGE")) instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 08:56:48 +04:00
Siavash Sameni
a925a7778d Fix syslog format + add syslog-ng config
All checks were successful
CI / test (push) Successful in 1m30s
- Syslog now uses RFC 3164 (BSD) format with proper timestamps
  and facility=local0 for easy filtering
- Added deploy/syslog-ng-btest.conf with filters for:
  - All btest events (all.log + daily rotation)
  - Auth events only (auth.log)
  - Test events only (tests.log)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 08:48:35 +04:00
Siavash Sameni
a28fc1dc08 v0.5.0: IPv6 off by default, mark as experimental
All checks were successful
CI / test (push) Successful in 1m25s
Build & Release / release (push) Successful in 3m0s
IPv6 listener now requires explicit --listen6 flag (disabled by default).
TCP over IPv6 works fully. UDP over IPv6 has macOS kernel limitations
(ENOBUFS on send_to). On Linux, IPv6 UDP works fine.

Usage:
  btest -s                    # IPv4 only (default)
  btest -s --listen6          # IPv4 + IPv6 on ::
  btest -s --listen6 ::1      # IPv4 + IPv6 on specific address

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 20:54:53 +04:00
Siavash Sameni
29643e7589 Revert: always report rx_bytes in UDP status, not tx_bytes
All checks were successful
CI / test (push) Successful in 1m27s
Reporting tx_bytes in TX-only mode caused MikroTik to show speed on
the wrong side (Tx instead of Rx). MikroTik tracks its own Rx by
counting UDP arrivals — the status bytes_received is for the OTHER
direction (how much we received from the client).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 19:52:14 +04:00
Siavash Sameni
0c14e6cf5b Fix UDP TX-only status: report tx_bytes instead of rx_bytes
All checks were successful
CI / test (push) Successful in 1m26s
Build & Release / release (push) Successful in 3m8s
In TX-only mode (MikroTik receives), we sent rx_bytes=0 in status
because we weren't receiving anything. But MikroTik client needs
to see non-zero bytes in the status to know data is flowing.
Now report tx_bytes when in TX-only mode.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 19:39:28 +04:00
Siavash Sameni
b8fa6d4580 Fix IPv6 UDP server TX: use connected socket for single-connection
All checks were successful
CI / test (push) Successful in 1m27s
pcap analysis proved: connected send() achieves 462k pps on IPv6,
while unconnected send_to() hits ENOBUFS at 5k pps then stalls.

Reverted the "always unconnected for IPv6" workaround. Now only
multi-connection mode uses unconnected sockets. Single-connection
always connects, which works for both IPv4 and IPv6 TX and RX.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 19:28:42 +04:00
Siavash Sameni
6288fe9f25 Fix IPv6 UDP TX: reset consecutive_errors after yield, pace every 16 pkts
All checks were successful
CI / test (push) Successful in 1m27s
ENOBUFS hits every send on macOS IPv6 because the interface output queue
is full. The adaptive backoff never recovered because consecutive_errors
never reset. Now reset after sleeping, and yield more frequently (every
16 packets instead of 64).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 19:13:46 +04:00
Siavash Sameni
50c0ba528d Fix IPv6 UDP: send NDP probe before data to populate neighbor cache
All checks were successful
CI / test (push) Successful in 1m26s
macOS returns ENOBUFS on IPv6 send_to() until NDP neighbor resolution
completes. Send a 1-byte probe packet and wait 200ms for NDP to resolve
before starting the data blast.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 19:06:41 +04:00
Siavash Sameni
4e3b2939ca Fix IPv6 UDP buffers: create socket with socket2 before tokio
All checks were successful
CI / test (push) Successful in 1m27s
The into_std/from_std conversion lost the buffer settings. Now create
the raw socket with socket2 first, set SO_SNDBUF/SO_RCVBUF to 4MB,
then wrap with tokio. Also logs actual buffer sizes for debugging.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 19:04:27 +04:00
Siavash Sameni
6ba57864a0 Fix IPv6 UDP TX: enlarge socket buffers to 4MB
All checks were successful
CI / test (push) Successful in 1m25s
macOS IPv6 UDP sockets have tiny default send buffers, causing
immediate ENOBUFS on every send_to(). Set SO_SNDBUF and SO_RCVBUF
to 4MB using socket2, matching what works for high-throughput IPv4.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 19:01:50 +04:00
Siavash Sameni
a1dbc6dc5a Fix client IPv6 UDP: use SocketAddr::new() and bind to [::]
All checks were successful
CI / test (push) Successful in 1m26s
Build & Release / release (push) Successful in 3m10s
Same fix as server side — format!("{}:{}", ipv6, port) fails.
Use SocketAddr::new() for IPv6 and bind to [::] instead of 0.0.0.0.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:54:45 +04:00
Siavash Sameni
7be6a0d541 Fix IPv6 UDP TX: adaptive backoff on ENOBUFS
All checks were successful
CI / test (push) Successful in 1m28s
Build & Release / release (push) Successful in 3m10s
IPv6 UDP sends hit ENOBUFS much faster than IPv4 (smaller kernel
buffers, NDP overhead). Fixed:
- Adaptive backoff: 200us→10ms as errors accumulate, resets on success
- Higher error threshold: 50k instead of 1k before stopping
- Yield with sleep when errors have been seen recently

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:45:04 +04:00
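[Editor's note] The adaptive backoff curve described above (200us growing to a 10ms cap as ENOBUFS errors accumulate) can be sketched as a pure function; the exact growth law in the real code is unknown, so linear scaling is an assumption here. The caller resets the error counter on a successful send (and, per a later fix in this log, after sleeping).

```rust
use std::time::Duration;

/// Sleep duration after an ENOBUFS send failure: starts at 200us and grows
/// linearly with consecutive errors, capped at 10ms.
fn enobufs_backoff(consecutive_errors: u32) -> Duration {
    let micros = 200u64.saturating_mul(u64::from(consecutive_errors.max(1)));
    Duration::from_micros(micros.min(10_000))
}
```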
Siavash Sameni
ba0a8f1b7c Add UDP TX error logging for IPv6 debugging
All checks were successful
CI / test (push) Successful in 1m27s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:41:57 +04:00
Siavash Sameni
176cdae239 Fix IPv6 UDP: use unconnected socket for IPv6 peers
All checks were successful
CI / test (push) Successful in 1m28s
macOS connected IPv6 UDP sockets don't receive properly.
Use unconnected socket (send_to/recv_from) for IPv6 peers,
same as multi-connection mode.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:37:43 +04:00
Siavash Sameni
0385d2e745 Fix IPv6 UDP: use SocketAddr::new() and bind correct address family
All checks were successful
CI / test (push) Successful in 1m23s
format!("{}:{}", ipv6_addr, port) produces invalid socket address.
Use SocketAddr::new() instead. Also bind UDP to [::] for IPv6 peers
and 0.0.0.0 for IPv4 peers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:31:22 +04:00
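[Editor's note] The bug above is easy to reproduce: `format!("{}:{}", ipv6_addr, port)` yields `::1:8080`, which is not a valid socket address string (IPv6 literals need brackets), while `SocketAddr::new()` sidesteps string formatting entirely. A self-contained demonstration:

```rust
use std::net::{IpAddr, Ipv6Addr, SocketAddr};

/// Naive formatting — broken for IPv6 because the literal is unbracketed.
fn naive_format(ip: IpAddr, port: u16) -> String {
    format!("{}:{}", ip, port)
}

/// The fix: construct the address directly, valid for both families.
fn peer_addr(ip: IpAddr, port: u16) -> SocketAddr {
    SocketAddr::new(ip, port)
}

/// Returns (does the naive string parse?, the correct rendering).
fn demo() -> (bool, String) {
    let ip = IpAddr::V6(Ipv6Addr::LOCALHOST);
    let bad_parses = naive_format(ip, 8080).parse::<SocketAddr>().is_ok();
    (bad_parses, peer_addr(ip, 8080).to_string())
}
```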
Siavash Sameni
7bbb7c9d9b Add dual-stack IPv4+IPv6 listening
All checks were successful
CI / test (push) Successful in 1m24s
Server now binds on both IPv4 (0.0.0.0) and IPv6 (::) by default.
Uses tokio::select! to accept from whichever listener has a connection.

New flags:
  --listen <addr>   IPv4 listen address (default: 0.0.0.0, "none" to disable)
  --listen6 <addr>  IPv6 listen address (default: ::, "none" to disable)

Examples:
  btest -s                          # listen on both v4 and v6
  btest -s --listen6 none           # IPv4 only
  btest -s --listen none            # IPv6 only
  btest -s --listen 192.168.1.1     # specific IPv4 address

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:28:48 +04:00
Siavash Sameni
2dec6cc007 v0.5.0: Add syslog support, fix TCP send/both, EC-SRP5 server auth
All checks were successful
CI / test (push) Successful in 1m22s
New features:
- --syslog <address:port> sends structured events to remote syslog (RFC 5424 UDP)
  Events: AUTH_SUCCESS, AUTH_FAILURE, TEST_START, TEST_END, TEST_RESULT
- EC-SRP5 authentication for both client and server modes
- TCP multi-connection support (session tokens, all 3 directions)

Bug fixes since v0.2.0:
- EC-SRP5 server: fixed gamma parity (was 50% auth failure rate)
- EC-SRP5 server: use lift_x not redp1 for verification
- TCP send direction: server sends 12-byte status messages to client
- TCP both direction: TX loop injects status between data packets
- TCP data: send all zeros (no 0x07 header that MikroTik rejected)
- TCP disconnect detection: running flag set on EOF
- UDP multi-connection: unconnected socket accepts all source ports

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:22:31 +04:00
Siavash Sameni
f9289cca55 Add TCP status for bidirectional mode
All checks were successful
CI / test (push) Successful in 1m22s
In BOTH direction, the TX loop now injects 12-byte status messages
every 1 second between data packets, reporting rx_bytes to the client.
Multi-connection mode also updated with same logic for all 3 cases:
- TX only: pure data
- RX only: status sender on writer
- BOTH: TX data + interleaved status messages

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:10:46 +04:00
Siavash Sameni
8b127d833f Fix TCP status: use swap(0) and skip status_report_loop in RX mode
All checks were successful
CI / test (push) Successful in 1m24s
The status sender and status_report_loop were BOTH calling swap(0)
on rx_bytes, racing each other. Now the status sender owns the swap
and prints stats itself. The report loop is skipped in RX-only TCP mode.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 18:03:28 +04:00
Siavash Sameni
cdad23ffa0 Fix TCP status: report delta bytes per interval, not cumulative
All checks were successful
CI / test (push) Successful in 1m20s
We were sending the cumulative rx_bytes total, which made the reported speed
zigzag because MikroTik interprets the value as per-interval bandwidth.
Now the sender tracks last_rx and reports the (current - last) delta each second.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:58:58 +04:00
Siavash Sameni
51bc4ddf16 Fix TCP send: server sends 12-byte status messages when receiving
All checks were successful
CI / test (push) Successful in 1m19s
pcap of MikroTik-as-server showed it sends periodic 12-byte status
messages back to the client even in RX-only mode. The client needs
these to display speed. Added tcp_status_sender that writes status
messages containing rx_bytes on the TCP write half every 1 second.

Reverted the "always bidirectional" change — TCP direction is
conditional, but RX mode now uses the writer for status instead
of keeping it idle.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:56:36 +04:00
Siavash Sameni
fa4fd63fb3 Fix TCP: always send bidirectional data regardless of direction
All checks were successful
CI / test (push) Successful in 1m21s
MITM capture of MikroTik-to-MikroTik showed both sides always send
zero-filled TCP streams, regardless of the direction setting. Direction
only controls what gets measured. Our server wasn't starting a TX thread
when direction=RX, so MikroTik saw no data and reported 0 speed.

Now TCP always starts both TX and RX on every connection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:46:16 +04:00
Siavash Sameni
d8f3b9c189 Fix TCP data: send all zeros, not 0x07 header
All checks were successful
CI / test (push) Successful in 1m20s
MITM capture showed MikroTik sends all-zero TCP data streams.
Our server was setting packet[0]=0x07 (STATUS_MSG_TYPE), which
MikroTik rejected. TCP mode has no status headers — just raw
zero-filled data streams in both directions.

Fixed in both server and client TCP TX loops.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:39:12 +04:00
Siavash Sameni
9552cbef1a Fix disconnect detection: TX/RX loops set running=false on EOF
All checks were successful
CI / test (push) Successful in 1m20s
When a TCP connection closes (EOF or write error), the loop now sets
the shared running flag to false, which stops the status report loop
and all other tasks. Adds "test ended" log messages.

The TCP multi-conn "MikroTik shows 0 on send" is a separate issue
requiring TCP-level status exchange (MikroTik sends 12-byte status
messages on TCP connections, not just a data stream).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:30:16 +04:00
Siavash Sameni
6c82228dd1 Fix EC-SRP5 server: use stored gamma parity, not hardcoded true
All checks were successful
CI / test (push) Successful in 1m21s
The gamma point's y-parity depends on the random salt. Using hardcoded
parity=true caused ~50% of auth attempts to fail (whenever the actual
parity was 0). Now stored from key derivation and used correctly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:22:06 +04:00
Siavash Sameni
a87dd7510f Fix TCP multi-connection: TX/RX on ALL streams, not just primary
All checks were successful
CI / test (push) Successful in 1m19s
pcap analysis showed MikroTik sends/receives data across all 20 TCP
connections, but we only used the primary. Now all streams get their
own TX and RX tasks, distributing bandwidth across all connections.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:17:00 +04:00
Siavash Sameni
b28c553e10 Fix EC-SRP5 server: use lift_x not redp1 for verification
All checks were successful
CI / test (push) Successful in 1m20s
Server-side shared secret used redp1(x_gamma) which is the hash-to-curve
blinding function, but verification needs lift_x(x_gamma) — the raw
validator public key point. Also fixed prime_mod_sqrt for p ≡ 5 (mod 8)
using Atkin's algorithm instead of Tonelli-Shanks.

Removed unused password parameter from server_authenticate.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:10:31 +04:00
Siavash Sameni
58274da859 Add EC-SRP5 authentication (RouterOS >= 6.43)
All checks were successful
CI / test (push) Successful in 1m18s
Client: auto-detects 03 response and performs EC-SRP5 handshake
Server: --ecsrp5 flag enables Curve25519 Weierstrass EC-SRP5 auth
  btest -s -a admin -p password --ecsrp5

Protocol: [len][payload] framing (no 0x06 handler, unlike Winbox)
Crypto: Curve25519 in Weierstrass form, SHA256, SRP key exchange

Based on MarginResearch/mikrotik_authentication (Apache 2.0).
Verified against MikroTik RouterOS 7.x via MITM protocol analysis.

34 tests (10 unit, 6 EC-SRP5 integration, 8 base integration, 10 doc-tests).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:56:38 +04:00
60 changed files with 11298 additions and 325 deletions


@@ -14,7 +14,7 @@ jobs:
- name: Install dependencies
run: |
apt-get update && apt-get install -y --no-install-recommends \
git curl jq ca-certificates zip \
git curl jq ca-certificates zip unzip \
musl-tools \
gcc-aarch64-linux-gnu \
gcc-arm-linux-gnueabihf \
@@ -23,7 +23,14 @@ jobs:
x86_64-unknown-linux-musl \
aarch64-unknown-linux-musl \
armv7-unknown-linux-musleabihf \
x86_64-pc-windows-gnu
x86_64-pc-windows-gnu \
aarch64-linux-android \
armv7-linux-androideabi
# Install Android NDK for cross-compilation
NDK_VER=r27c
curl -sL https://dl.google.com/android/repository/android-ndk-${NDK_VER}-linux.zip -o /tmp/ndk.zip
unzip -q /tmp/ndk.zip -d /opt && rm /tmp/ndk.zip
export ANDROID_NDK_HOME=/opt/android-ndk-${NDK_VER}
- name: Ensure code is present
run: |
@@ -47,6 +54,12 @@ jobs:
[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"
[target.aarch64-linux-android]
linker = "/opt/android-ndk-r27c/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android35-clang"
[target.armv7-linux-androideabi]
linker = "/opt/android-ndk-r27c/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi35-clang"
TOML
- name: Build Linux x86_64
@@ -61,6 +74,12 @@ jobs:
- name: Build Windows x86_64
run: cargo build --release --target x86_64-pc-windows-gnu
- name: Build Android aarch64 (ARMv8)
run: cargo build --release --target aarch64-linux-android
- name: Build Android armv7 (ARMv7)
run: cargo build --release --target armv7-linux-androideabi
- name: Package all
run: |
mkdir -p /artifacts
@@ -81,6 +100,14 @@ jobs:
zip /artifacts/btest-windows-x86_64.zip btest.exe
cd -
cd target/aarch64-linux-android/release
tar czf /artifacts/btest-android-aarch64.tar.gz btest
cd -
cd target/armv7-linux-androideabi/release
tar czf /artifacts/btest-android-armv7.tar.gz btest
cd -
cd /artifacts
sha256sum * > checksums-sha256.txt
cat checksums-sha256.txt
@@ -103,6 +130,8 @@ jobs:
| Linux | aarch64 (RPi 64-bit) | btest-linux-aarch64.tar.gz |
| Linux | armv7 (RPi 32-bit) | btest-linux-armv7.tar.gz |
| Windows | x86_64 | btest-windows-x86_64.zip |
| Android | aarch64 (ARMv8, Termux) | btest-android-aarch64.tar.gz |
| Android | armv7 (ARMv7, Termux) | btest-android-armv7.tar.gz |
| macOS | aarch64 / x86_64 | Run \`scripts/build-macos-release.sh --upload ${TAG}\` |
| Docker | x86_64 | \`docker pull ${REGISTRY}/manawenuz/btest-rs:${TAG}\` |

.gitignore (vendored, 4 changes)

@@ -3,3 +3,7 @@
btest_original
.claude/
.env
proto-test/venv/
**/__pycache__/
results.csv
server_results.csv

.gitmodules (vendored, 3 changes)

@@ -1,3 +0,0 @@
[submodule "btest-opensource"]
path = btest-opensource
url = https://github.com/samm-git/btest-opensource

Cargo.lock (generated, 1556 changes): file diff suppressed because it is too large


@@ -1,9 +1,9 @@
[package]
name = "btest-rs"
version = "0.1.0"
version = "0.6.0"
edition = "2021"
description = "MikroTik Bandwidth Test (btest) server and client — a Rust reimplementation"
license = "MIT"
description = "MikroTik Bandwidth Test (btest) server and client with EC-SRP5 auth — a Rust reimplementation"
license = "MIT AND Apache-2.0"
repository = "https://github.com/samm-git/btest-opensource"
keywords = ["mikrotik", "bandwidth", "btest", "network", "benchmarking"]
categories = ["command-line-utilities", "network-programming"]
@@ -16,6 +16,23 @@ path = "src/lib.rs"
name = "btest"
path = "src/main.rs"
[[bin]]
name = "btest-client"
path = "src/bin/client_only.rs"
[[bin]]
name = "btest-server"
path = "src/bin/server_only.rs"
[[bin]]
name = "btest-server-pro"
path = "src/server_pro/main.rs"
required-features = ["pro"]
[features]
default = []
pro = ["dep:rusqlite", "dep:ldap3", "dep:axum", "dep:tower-http", "dep:serde", "dep:serde_json", "dep:askama"]
[dependencies]
tokio = { version = "1", features = ["full"] }
clap = { version = "4", features = ["derive"] }
@@ -27,9 +44,27 @@ tracing-subscriber = { version = "0.3", features = ["env-filter"] }
rand = "0.8"
socket2 = "0.5"
anyhow = "1.0.102"
num-bigint = "0.4.6"
num-traits = "0.2.19"
num-integer = "0.1.46"
sha2 = "0.11.0"
hostname = "0.4.2"
rusqlite = { version = "0.39.0", features = ["bundled"], optional = true }
ldap3 = { version = "0.12.1", optional = true }
axum = { version = "0.8.8", features = ["tokio"], optional = true }
tower-http = { version = "0.6.8", features = ["fs", "cors"], optional = true }
serde = { version = "1.0.228", features = ["derive"], optional = true }
serde_json = { version = "1.0.149", optional = true }
askama = { version = "0.15.6", optional = true }
[profile.release]
opt-level = 3
lto = true
strip = true
codegen-units = 1
# Minimal size profile for embedded/OpenWrt targets
[profile.release-small]
inherits = "release"
opt-level = "z"
panic = "abort"

KNOWN_ISSUES.md (new file, 125 lines)

@@ -0,0 +1,125 @@
# Known Issues
This document tracks known limitations, bugs, and platform-specific issues in btest-rs. If you encounter an issue not listed here, please report it at: **https://git.manko.yoga/manawenuz/btest-rs/issues**
## IPv6 UDP on macOS (Server Mode)
**Severity:** High
**Affects:** macOS only (server mode, UDP, IPv6)
**Status:** Open
When running as a server on macOS and a MikroTik client connects over IPv6 UDP, the server's UDP transmit hits `ENOBUFS` (error 55 — "No buffer space available") repeatedly. This causes:
- Direction "receive" (server TX): intermittent packet bursts with gaps, MikroTik shows unstable or low speed
- Direction "send" (server RX): works, but speed drops over time due to MikroTik's speed adaptation receiving irregular status feedback
- Direction "both": TX side severely degraded
**Root cause:** macOS kernel returns `ENOBUFS` on IPv6 `send_to()` much more aggressively than IPv4 due to smaller interface output queues and per-packet NDP overhead. Connected sockets (`send()`) perform better than unconnected (`send_to()`), but still hit limits under high throughput.
**Workaround:** Use IPv4 for UDP tests on macOS, or deploy the server on Linux where IPv6 UDP works correctly.
**Not affected:**
- IPv4 UDP (all directions, all platforms)
- IPv6 TCP (all directions, all platforms)
- Client mode over IPv6 (connecting TO a MikroTik server works fine at 600+ Mbps)
## IPv6 UDP — Not Tested on Linux
**Severity:** Unknown
**Affects:** Linux server, IPv6, UDP
**Status:** Untested
IPv6 UDP in server mode has not been thoroughly tested on Linux. The macOS ENOBUFS issue is kernel-specific and likely does not exist on Linux (which has much better IPv6 UDP buffer management). Testing and reports welcome.
## macOS UDP Send Buffer Saturation
**Severity:** Medium
**Affects:** macOS (client and server, IPv4 and IPv6, UDP)
**Status:** Mitigated
On macOS, when sending UDP at unlimited speed, the kernel buffer fills quickly and returns `ENOBUFS`. The adaptive backoff mechanism (200μs → 10ms) mitigates this, but the first few seconds of a test may show:
- Interval 1: high burst (40-300 Mbps depending on conditions)
- Interval 2: 0 bps (buffer full, backoff in effect)
- Interval 3+: gradually recovers to steady state
This causes the first 2-3 seconds of UDP tests to be unreliable on macOS. On Linux, this issue does not occur.
**Workaround:** Ignore the first few seconds of results, or use TCP mode which does not have this issue.
## Windows Binaries Not Tested
**Severity:** Unknown
**Affects:** Windows x86_64
**Status:** Untested
Windows binaries are cross-compiled from Linux using `gcc-mingw-w64` in CI. They have never been tested on actual Windows systems. Issues may include:
- Socket behavior differences (Winsock vs BSD sockets)
- IPv6 dual-stack handling
- Path separator issues in CSV output
- Console output encoding
**Help wanted:** If you test on Windows, please report your findings.
## EC-SRP5 Server Authentication — Occasional Failure
**Severity:** Low
**Affects:** Server mode with `--ecsrp5`
**Status:** Mostly fixed
EC-SRP5 server authentication occasionally fails with "client proof mismatch". This was largely fixed by storing the correct gamma parity from key derivation, but edge cases may still exist with certain salt/password combinations due to the Curve25519 Weierstrass arithmetic.
**Workaround:** Retry the connection. If it fails consistently, restart the server (which regenerates the salt).
## MikroTik Speed Adaptation Staircase (Server RX, UDP)
**Severity:** Low
**Affects:** Server mode, UDP, direction "send" (MikroTik sends to us)
**Status:** MikroTik client behavior
When MikroTik connects as a client and sends data (direction "send"), the speed may gradually decrease in a staircase pattern over 30-60 seconds. This is caused by MikroTik's client-side speed adaptation algorithm, not by our server.
The original C btest-opensource server exhibits the same behavior. Single-connection mode (`connection-count=1`) provides the best results.
## TCP Multi-Connection Bandwidth Reporting
**Severity:** Low
**Affects:** Server mode, TCP, `connection-count > 1`
**Status:** Open
With TCP multi-connection, the server correctly handles all connections and data flows, but bandwidth is only measured on the primary connection's status loop. MikroTik may show lower-than-actual speeds because status messages are not distributed across all connections.
## Bandwidth Limit (`-b`) Not Fully Effective
**Severity:** Low
**Affects:** Client mode, `-b` flag
**Status:** Open
The `-b` bandwidth limit flag does not reliably cap speed. The `calc_send_interval` function computes the inter-packet delay correctly, but tokio's timer resolution and task scheduling can cause actual throughput to exceed the specified limit, especially for high bandwidth values.
---
## Reporting Issues
Found a bug or unexpected behavior? Please report it:
- **Issue tracker:** https://git.manko.yoga/manawenuz/btest-rs/issues
- **Include:** OS/platform, btest-rs version (`btest --version`), MikroTik RouterOS version, protocol (TCP/UDP), direction, connection count, and the full command line used.
- **Packet captures:** If possible, attach a tcpdump/pcap capture. Use: `sudo tcpdump -i <interface> -w capture.pcap -s 200 'host <mikrotik_ip> and (port 2000 or portrange 2001-2356)'`
- **Debug logs:** Run with `-vv` to get hex-level status exchange dumps.
## Platform Test Matrix
| Platform | TCP4 | UDP4 | TCP6 | UDP6 | Notes |
|----------|------|------|------|------|-------|
| macOS (ARM64) | Pass | Pass* | Pass | Fail** | *UDP send buffer saturation on first seconds |
| macOS (x86_64) | Untested | Untested | Untested | Untested | |
| Linux (x86_64) | Pass | Pass | Pass | Untested | Deployed on Ubuntu 24.04 |
| Linux (aarch64) | Untested | Untested | Untested | Untested | RPi builds available |
| Linux (armv7) | Untested | Untested | Untested | Untested | RPi builds available |
| Windows (x86_64) | Untested | Untested | Untested | Untested | Cross-compiled, never tested |
**Pass** = verified against MikroTik RouterOS 7.x
**Fail** = known issue documented above
**Untested** = builds available but not verified

LICENSE (13 changes)

@@ -3,7 +3,11 @@ MIT License
Copyright (c) 2026 btest-rs contributors
Based on btest-opensource by Alex Samorukov (https://github.com/samm-git/btest-opensource)
Original work Copyright (c) 2016 Alex Samorukov
Original work Copyright (c) 2016 Alex Samorukov (MIT License)
EC-SRP5 authentication based on research by Margin Research
(https://github.com/MarginResearch/mikrotik_authentication)
Original work Copyright (c) 2022 Margin Research (Apache License 2.0)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -22,3 +26,10 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---
NOTICE: This project includes code derived from works under the Apache License 2.0.
The EC-SRP5 elliptic curve implementation is based on MarginResearch/mikrotik_authentication.
See https://github.com/MarginResearch/mikrotik_authentication/blob/master/LICENSE
for the full Apache 2.0 license text.

README.md (260 changes)

@@ -1,20 +1,42 @@
# btest-rs
A Rust reimplementation of the [MikroTik Bandwidth Test (btest)](https://wiki.mikrotik.com/wiki/Manual:Tools/Bandwidth_Test) protocol. Both server and client modes, compatible with MikroTik RouterOS devices.
A Rust reimplementation of the [MikroTik Bandwidth Test (btest)](https://wiki.mikrotik.com/wiki/Manual:Tools/Bandwidth_Test) protocol. Both server and client modes, fully compatible with MikroTik RouterOS devices.
## Based on
## Free Public Servers
This project is a clean-room Rust reimplementation based on the protocol reverse-engineering work done by **Alex Samorukov** in [btest-opensource](https://github.com/samm-git/btest-opensource). The original C implementation and protocol documentation were invaluable in making this project possible. Full credit to Alex and all contributors to that project.
Test your MikroTik link speed right now — no setup, no registration:
The original `btest-opensource` project is included as a git submodule for reference and protocol documentation.
| Server | Location | Dashboard |
|--------|----------|-----------|
| `104.225.217.60` | US | [btest.home.kg](https://btest.home.kg) |
| `188.245.59.196` | EU | [btest.mikata.ru](https://btest.mikata.ru) |
## Why Rust?
```
/tool bandwidth-test address=104.225.217.60 user=btest password=btest protocol=tcp direction=both
```
- **Single static binary** - 2 MB, zero dependencies, runs anywhere
- **Cross-platform** - macOS, Linux (x86_64, ARM64), Docker
- **Async I/O** - tokio-based, handles many concurrent connections efficiently
- **Memory safe** - no buffer overflows, no use-after-free, no data races
- **Easy deployment** - `scp` one file, done. Or use the systemd installer.
After the test, visit `https://btest.home.kg/dashboard/YOUR_IP` to see your results, throughput history, and quota usage. Per-IP limits: 2 GB daily / 8 GB weekly / 24 GB monthly.
> **Note:** TCP is recommended for remote testing. UDP bidirectional through NAT will only show one direction — this is a btest protocol limitation, not specific to btest-rs. See [KNOWN_ISSUES.md](KNOWN_ISSUES.md) for details.
Want to run your own public server? Build with `cargo build --release --features pro` — see [Server Pro](#server-pro) below.
## Features
- **Full protocol support** -- TCP and UDP data transfer, IPv4 and IPv6
- **EC-SRP5 authentication** -- modern RouterOS >= 6.43 Curve25519-based auth (server and client)
- **MD5 authentication** -- legacy RouterOS < 6.43 challenge-response auth
- **Multi-connection support** -- handles MikroTik's multi-connection UDP mode
- **Bidirectional testing** -- simultaneous upload and download
- **Syslog logging** -- send structured events (auth, test start/end) to a remote syslog server
- **CSV output** -- append machine-readable test results to a CSV file
- **CPU usage monitoring** -- local and remote CPU shown per interval, warning at >70%
- **Timed tests** -- `--duration` flag to automatically stop after N seconds
- **Quiet mode** -- suppress terminal output for scripted/automated use
- **NAT traversal** -- probe packet to open firewall holes for UDP receive
- **Single static binary** -- ~2 MB, zero runtime dependencies (musl build)
- **Cross-platform** -- macOS, Linux (x86_64, ARM64, ARMv7), Windows, Android (Termux), Docker
- **Async I/O** -- tokio-based, handles many concurrent connections efficiently
## Performance
@@ -29,53 +51,106 @@ Tested over WiFi 6E (MikroTik RouterOS <-> macOS):
| Client TCP bidirectional | TCP | **264/264 Mbps** |
| Server bidirectional | UDP | **280/393 Mbps** |
On wired gigabit links, expect line-rate performance in both TCP and UDP modes.
## Installation
### Pre-built binary
```bash
# Build for Linux x86_64 from macOS (requires Docker)
scripts/build-linux.sh
# Copy to server
scp dist/btest root@yourserver:/usr/local/bin/btest
```
### From source
```bash
cargo install --path .
```
### Pre-built binaries
Download from [releases](https://git.manko.yoga/manawenuz/btest-rs/releases) or [GitHub releases](https://github.com/manawenuz/btest-rs/releases):
```bash
# Linux x86_64
curl -L <release-url>/btest-linux-x86_64.tar.gz | tar xz
sudo mv btest /usr/local/bin/
# Raspberry Pi 4/5 (64-bit OS)
curl -L <release-url>/btest-linux-aarch64.tar.gz | tar xz
sudo mv btest /usr/local/bin/
# Raspberry Pi 3/Zero 2 (32-bit OS)
curl -L <release-url>/btest-linux-armv7.tar.gz | tar xz
sudo mv btest /usr/local/bin/
# Windows
# Download btest-windows-x86_64.zip from releases
# Android (Termux, no root needed)
curl -L <release-url>/btest-android-aarch64.tar.gz | tar xz
mv btest $PREFIX/bin/
```
### Raspberry Pi
The static musl binaries run on any Raspberry Pi without dependencies:
```bash
# On the Pi — detect architecture and install
ARCH=$(uname -m)
case $ARCH in
aarch64) FILE=btest-linux-aarch64.tar.gz ;;
armv7l) FILE=btest-linux-armv7.tar.gz ;;
*) echo "Unsupported: $ARCH"; exit 1 ;;
esac
curl -LO "https://github.com/manawenuz/btest-rs/releases/latest/download/$FILE"
tar xzf "$FILE"
sudo mv btest /usr/local/bin/
rm "$FILE"
# Run as server
btest -s -a admin -p password --ecsrp5
# Or install as systemd service
curl -LO https://raw.githubusercontent.com/manawenuz/btest-rs/main/scripts/install-service.sh
sudo bash install-service.sh --auth-user admin --auth-pass password
```
### Docker
```bash
docker compose up -d # Server on port 2000
docker compose up -d
```
See [docs/docker.md](docs/docker.md) for full Docker and deployment options.
### systemd service
```bash
# On the target Linux server:
sudo ./scripts/install-service.sh
sudo ./scripts/install-service.sh --auth-user admin --auth-pass secret
sudo ./scripts/install-service.sh --auth-user admin --auth-pass secret --port 2000
```
## Usage
The installer creates a dedicated `btest` system user, installs a hardened systemd unit, and enables the service.
## Quick Start
### Server mode
MikroTik devices connect to this server to run bandwidth tests.
```bash
# Basic server (no auth)
# No authentication
btest -s
# With authentication
# MD5 authentication (legacy RouterOS)
btest -s -a admin -p password
# Custom port with verbose logging
btest -s -P 2000 -v
# EC-SRP5 authentication (RouterOS >= 6.43)
btest -s -a admin -p password --ecsrp5
# Custom port, verbose logging
btest -s -P 3000 -v
# With syslog and CSV logging
btest -s -a admin -p password --syslog 192.168.1.1:514 --csv /var/log/btest.csv
```
### Client mode
@@ -89,24 +164,62 @@ btest -c 192.168.88.1 -r
# TCP upload test
btest -c 192.168.88.1 -t
# Bidirectional
# Bidirectional TCP
btest -c 192.168.88.1 -t -r
# UDP with bandwidth limit
# UDP download with bandwidth limit
btest -c 192.168.88.1 -r -u -b 100M
# With authentication
btest -c 192.168.88.1 -r -a admin -p password
# Timed test (30 seconds), results to CSV
btest -c 192.168.88.1 -r -d 30 --csv results.csv
# Quiet mode (no terminal output)
btest -c 192.168.88.1 -r -d 10 --csv results.csv -q
# UDP through NAT
btest -c 192.168.88.1 -r -u -n
```
### Debug logging
```bash
btest -s -v # info + debug
btest -s -vv # info + debug + trace (hex dumps of status exchange)
btest -s -v # debug messages
btest -s -vv # trace messages (hex dumps of status exchange)
btest -s -vvv # maximum verbosity
```
## MikroTik Setup
## CLI Reference
```
Usage: btest [OPTIONS]
Options:
-s, --server Run in server mode
-c, --client <HOST> Run in client mode, connect to HOST
-t, --transmit Client transmits data (upload test)
-r, --receive Client receives data (download test)
-u, --udp Use UDP instead of TCP
-b, --bandwidth <BW> Target bandwidth limit (e.g., 100M, 1G, 500K)
-P, --port <PORT> Listen/connect port [default: 2000]
--listen <ADDR> IPv4 listen address [default: 0.0.0.0] (use "none" to disable)
--listen6 [<ADDR>] Enable IPv6 listener [default: ::] (experimental)
-a, --authuser <USER> Authentication username
-p, --authpass <PASS> Authentication password
--ecsrp5 Use EC-SRP5 authentication (RouterOS >= 6.43)
-n, --nat NAT traversal mode (send UDP probe packet)
-d, --duration <SECS> Test duration in seconds (client mode, 0=unlimited) [default: 0]
--csv <FILE> Output results to CSV file (appends if file exists)
-q, --quiet Suppress terminal output (use with --csv)
--syslog <HOST:PORT> Send logs to remote syslog server (UDP, RFC 3164)
-v, --verbose Increase log verbosity (-v, -vv, -vvv)
-h, --help Show help
-V, --version Show version
```
## MikroTik Configuration
### Enable btest server on MikroTik (for client mode)
@@ -116,26 +229,57 @@ btest -s -vv # info + debug + trace (hex dumps of status exchange)
### Run btest from MikroTik (connecting to our server)
**Important: Set Connection Count to 1** — multi-connection mode is not supported.
```
/tool/bandwidth-test address=<server-ip> direction=both protocol=udp user=admin password=password connection-count=1
/tool/bandwidth-test address=<server-ip> direction=both protocol=udp \
user=admin password=password
```
## Protocol
The MikroTik btest protocol uses:
- **TCP port 2000** for control (handshake, auth, status exchange)
- **UDP ports 2001+** for data transfer
- **MD5 challenge-response** authentication (RouterOS < 6.43)
- **TCP port 2000** for control (handshake, authentication, status exchange)
- **UDP ports 2001+** for data transfer (server side)
- **UDP ports 2257+** for data transfer (client side, offset +256)
- **MD5 double-hash challenge-response** authentication (RouterOS < 6.43)
- **EC-SRP5 Curve25519 Weierstrass** authentication (RouterOS >= 6.43)
- **1-second status interval** with dynamic speed adjustment
See the [original protocol documentation](btest-opensource/README.md) for wire-format details.
See [docs/protocol.md](docs/protocol.md) for the full wire-format specification.
## Known Limitations
## Authentication
- **EC-SRP5 authentication** (RouterOS >= 6.43) is not yet supported for client mode. Server mode works fine with MD5 auth. Disable auth on the MikroTik btest server as a workaround.
- **Multi-connection UDP** is supported. MikroTik's multi-connection mode sends from multiple source ports which are all accepted by the server.
Both legacy and modern MikroTik authentication schemes are supported:
| Scheme | RouterOS Version | Flag |
|--------|-----------------|------|
| None | Any | (no flags) |
| MD5 challenge-response | < 6.43 | `-a USER -p PASS` |
| EC-SRP5 (Curve25519) | >= 6.43 | `-a USER -p PASS --ecsrp5` |
In server mode, `--ecsrp5` advertises EC-SRP5 to connecting clients. Without it, MD5 is advertised. In client mode, the authentication type is auto-detected from the server's response.
## Known Issues
See [KNOWN_ISSUES.md](KNOWN_ISSUES.md) for the full list including:
- **IPv6 UDP on macOS** — server TX hits ENOBUFS, use IPv4 or deploy on Linux
- **macOS UDP send buffer** — first 2-3 seconds unreliable on unlimited speed tests
- **Windows binaries** — cross-compiled but untested
- **IPv6 UDP on Linux** — untested, likely works fine
Contributions and bug reports welcome:
- https://github.com/manawenuz/btest-rs/issues
- https://git.manko.yoga/manawenuz/btest-rs/issues
## Documentation
- [User Guide](docs/user-guide.md) -- complete CLI reference with examples for every mode
- [Architecture](docs/architecture.md) -- module structure, threading model, design decisions
- [Protocol Specification](docs/protocol.md) -- wire format, authentication, status exchange
- [Docker & Deployment](docs/docker.md) -- Docker, Docker Compose, systemd, firewall rules
- [EC-SRP5 Research](docs/ecsrp5-research.md) -- reverse-engineering notes and cryptographic details
- [Man Page](docs/man/btest.1) -- Unix manual page (install to `/usr/share/man/man1/`)
## Testing
@@ -146,13 +290,37 @@ scripts/test-mikrotik.sh <ip> # Test against MikroTik device
scripts/test-docker.sh # Docker container test
```
## Server Pro
An optional superset of the standard server with multi-user support, quotas, and a web dashboard. Build with `--features pro`:
```bash
cargo build --release --features pro --bin btest-server-pro
```
Features:
- **SQLite user database** — add/remove users, per-user quotas
- **Per-IP bandwidth quotas** — daily, weekly, monthly limits with inline byte budget enforcement
- **Web dashboard** — session history, throughput stats, quota progress bars, JSON export
- **TCP multi-connection** — handles MikroTik's default 20-connection mode
- **MD5 auth against DB** — proper challenge-response verification
```bash
# Create a user and start the server
btest-server-pro --users-db users.db useradd btest btest
btest-server-pro --users-db users.db --ip-daily 2147483648 --ip-weekly 8589934592 --web-port 8080
```
The pro features are completely optional and don't affect the standard `btest` binary.
## Credits
- **[btest-opensource](https://github.com/samm-git/btest-opensource)** by [Alex Samorukov](https://github.com/samm-git) - Original C implementation and protocol reverse-engineering that made this project possible. Licensed under MIT.
- **MikroTik** - Creator of the bandwidth test protocol and RouterOS.
- **[btest-opensource](https://github.com/samm-git/btest-opensource)** by [Alex Samorukov](https://github.com/samm-git) -- original C implementation and protocol reverse-engineering. Licensed under **MIT**.
- **[Margin Research](https://github.com/MarginResearch/mikrotik_authentication)** -- EC-SRP5 authentication reverse-engineering (Curve25519 Weierstrass, SRP key exchange). Licensed under **Apache 2.0**.
- **MikroTik** -- creator of the bandwidth test protocol and RouterOS.
## License
MIT License - see [LICENSE](LICENSE).
MIT License -- see [LICENSE](LICENSE).
This project is derived from [btest-opensource](https://github.com/samm-git/btest-opensource) (MIT License, Copyright 2016 Alex Samorukov). The original license and copyright notice are preserved as required.
This project is derived from [btest-opensource](https://github.com/samm-git/btest-opensource) (MIT License, Copyright 2016 Alex Samorukov). The EC-SRP5 implementation is based on research by [Margin Research](https://github.com/MarginResearch/mikrotik_authentication) (Apache License 2.0). Original license and copyright notices are preserved as required.

Submodule btest-opensource deleted from 5040a01267

deploy/alpine/APKBUILD (new file, 52 lines)

@@ -0,0 +1,52 @@
# Maintainer: Siavash Sameni <manwe at manko dot yoga>
pkgname=btest-rs
pkgver=0.6.0
pkgrel=0
pkgdesc="MikroTik Bandwidth Test server and client with EC-SRP5 auth"
url="https://github.com/manawenuz/btest-rs"
license="MIT AND Apache-2.0"
arch="x86_64 aarch64 armv7"
makedepends="cargo rust"
install="$pkgname.pre-install"
source="$pkgname-$pkgver.tar.gz::https://github.com/manawenuz/btest-rs/archive/refs/tags/v$pkgver.tar.gz
btest.initd
"
sha256sums="SKIP
SKIP
"
prepare() {
default_prepare
cd "$builddir"
cargo fetch --locked --target "$(rustc -vV | sed -n 's/host: //p')"
}
build() {
cd "$builddir"
export CARGO_TARGET_DIR=target
cargo build --frozen --release
}
check() {
cd "$builddir"
cargo test --frozen --release
}
package() {
cd "$builddir"
# binary
install -Dm755 "target/release/btest" "$pkgdir/usr/bin/btest"
# man page
install -Dm644 "docs/man/btest.1" "$pkgdir/usr/share/man/man1/btest.1"
# license
install -Dm644 "LICENSE" "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
# documentation
install -Dm644 "README.md" "$pkgdir/usr/share/doc/$pkgname/README.md"
# OpenRC init script
install -Dm755 "$srcdir/btest.initd" "$pkgdir/etc/init.d/btest"
}

deploy/alpine/btest.initd Executable file

@@ -0,0 +1,37 @@
#!/sbin/openrc-run
# OpenRC init script for btest-rs
# MikroTik Bandwidth Test server
name="btest"
description="MikroTik Bandwidth Test Server (btest-rs)"
command="/usr/bin/btest"
command_args="-s"
command_background=true
pidfile="/run/$name.pid"
# Run as dedicated user if it exists, otherwise root
command_user="btest:btest"
# Logging
output_log="/var/log/$name/$name.log"
error_log="/var/log/$name/$name.err"
depend() {
need net
after firewall
use dns logger
}
start_pre() {
# Create log directory (the pidfile in /run is created by start-stop-daemon
# as root, so /run itself must not be chowned to the service user)
checkpath -d -m 0755 -o "$command_user" /var/log/$name
}
stop() {
ebegin "Stopping $name"
start-stop-daemon --stop --pidfile "$pidfile" --retry TERM/5/KILL/3
eend $?
}

deploy/alpine/test-alpine.sh Executable file

@@ -0,0 +1,118 @@
#!/bin/sh
# Test Alpine Linux packaging for btest-rs
# Runs inside an Alpine Docker container to build and verify the APK.
#
# Usage (from repository root):
# docker run --rm -v "$PWD":/src alpine:latest /src/deploy/alpine/test-alpine.sh
#
set -eu
ALPINE_DIR="/src/deploy/alpine"
echo "=== Alpine APK packaging test ==="
echo "Alpine version: $(cat /etc/alpine-release)"
# ── Install build dependencies ──────────────────────────────────────
echo "--- Installing build dependencies ---"
apk update
apk add --no-cache \
alpine-sdk \
rust \
cargo \
sudo
# ── Create a non-root build user (abuild refuses to run as root) ──
echo "--- Setting up build user ---"
adduser -D builder
addgroup builder abuild
echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# ── Prepare build tree ──────────────────────────────────────────────
echo "--- Preparing build tree ---"
BUILD_DIR="/home/builder/btest-rs"
mkdir -p "$BUILD_DIR"
cp "$ALPINE_DIR/APKBUILD" "$BUILD_DIR/"
cp "$ALPINE_DIR/btest.initd" "$BUILD_DIR/"
# Generate signing key (required by abuild)
su builder -c "abuild-keygen -a -n -q"
sudo cp /home/builder/.abuild/*.rsa.pub /etc/apk/keys/
# ── Build the package ──────────────────────────────────────────────
echo "--- Building APK ---"
cd "$BUILD_DIR"
chown -R builder:builder "$BUILD_DIR"
su builder -c "abuild -r"
echo "--- Build succeeded ---"
# ── Locate and install the package ──────────────────────────────────
echo "--- Installing built APK ---"
APK_FILE=$(find /home/builder/packages -name "btest-rs-*.apk" -not -name "*doc*" | head -1)
if [ -z "$APK_FILE" ]; then
echo "FAIL: APK file not found"
exit 1
fi
echo "Found APK: $APK_FILE"
apk add --allow-untrusted "$APK_FILE"
# ── Verify installation ────────────────────────────────────────────
echo "--- Verifying installation ---"
FAIL=0
# Binary exists and is executable
if command -v btest >/dev/null 2>&1; then
echo "PASS: btest binary installed"
else
echo "FAIL: btest binary not found in PATH"
FAIL=1
fi
# Binary runs (show version / help)
if btest --help >/dev/null 2>&1; then
echo "PASS: btest --help exits successfully"
else
echo "FAIL: btest --help failed"
FAIL=1
fi
# Man page installed
if [ -f /usr/share/man/man1/btest.1 ]; then
echo "PASS: man page installed"
else
echo "FAIL: man page not found"
FAIL=1
fi
# License installed
if [ -f /usr/share/licenses/btest-rs/LICENSE ]; then
echo "PASS: LICENSE installed"
else
echo "FAIL: LICENSE not found"
FAIL=1
fi
# OpenRC init script installed
if [ -f /etc/init.d/btest ]; then
echo "PASS: OpenRC init script installed"
else
echo "FAIL: OpenRC init script not found"
FAIL=1
fi
# Init script is executable
if [ -x /etc/init.d/btest ]; then
echo "PASS: init script is executable"
else
echo "FAIL: init script is not executable"
FAIL=1
fi
# ── Summary ─────────────────────────────────────────────────────────
echo ""
if [ "$FAIL" -eq 0 ]; then
echo "=== All Alpine packaging tests PASSED ==="
else
echo "=== Some Alpine packaging tests FAILED ==="
exit 1
fi

deploy/aur/.SRCINFO Normal file

@@ -0,0 +1,15 @@
pkgbase = btest-rs
pkgdesc = MikroTik Bandwidth Test (btest) server and client with EC-SRP5 auth
pkgver = 0.6.0
pkgrel = 1
url = https://github.com/manawenuz/btest-rs
arch = x86_64
arch = aarch64
arch = armv7h
license = MIT
license = Apache-2.0
makedepends = cargo
source = btest-rs-0.6.0.tar.gz::https://github.com/manawenuz/btest-rs/archive/refs/tags/v0.6.0.tar.gz
sha256sums = SKIP
pkgname = btest-rs

deploy/aur/PKGBUILD Normal file

@@ -0,0 +1,58 @@
# Maintainer: Siavash Sameni <manwe at manko dot yoga>
pkgname=btest-rs
pkgver=0.6.0
pkgrel=1
pkgdesc="MikroTik Bandwidth Test (btest) server and client with EC-SRP5 auth"
arch=('x86_64' 'aarch64' 'armv7h')
url="https://github.com/manawenuz/btest-rs"
license=('MIT' 'Apache-2.0')
depends=()
makedepends=('cargo')
source=("$pkgname-$pkgver.tar.gz::https://github.com/manawenuz/btest-rs/archive/refs/tags/v$pkgver.tar.gz")
sha256sums=('SKIP')
prepare() {
cd "$pkgname-$pkgver"
export RUSTUP_TOOLCHAIN=stable
cargo fetch --locked --target "$(rustc -vV | sed -n 's/host: //p')"
}
build() {
cd "$pkgname-$pkgver"
export RUSTUP_TOOLCHAIN=stable
export CARGO_TARGET_DIR=target
cargo build --frozen --release
}
package() {
cd "$pkgname-$pkgver"
install -Dm755 "target/release/btest" "$pkgdir/usr/bin/btest"
install -Dm644 "LICENSE" "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
install -Dm644 "docs/man/btest.1" "$pkgdir/usr/share/man/man1/btest.1"
install -Dm644 "README.md" "$pkgdir/usr/share/doc/$pkgname/README.md"
# systemd service
install -Dm644 /dev/stdin "$pkgdir/usr/lib/systemd/system/btest.service" <<EOF
[Unit]
Description=MikroTik Bandwidth Test Server (btest-rs)
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/btest -s
Restart=always
RestartSec=5
DynamicUser=yes
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
}

deploy/aur/test-aur.sh Executable file

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
# Test the PKGBUILD in a Docker Arch Linux container.
# Usage: ./deploy/aur/test-aur.sh
set -euo pipefail
cd "$(dirname "$0")/../.."
echo "=== Testing AUR PKGBUILD in Arch Linux container ==="
docker run --rm -v "$(pwd):/src:ro" archlinux:latest bash -c '
set -euo pipefail
# Install base-devel and rust
pacman -Syu --noconfirm base-devel rustup git
rustup default stable
# Create build user (makepkg refuses to run as root)
useradd -m builder
echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# Copy source and PKGBUILD
su builder -c "
mkdir -p /tmp/build && cd /tmp/build
cp /src/deploy/aur/PKGBUILD .
# Build the package
makepkg -si --noconfirm
# Verify
echo
echo \"=== Installed ===\"
btest --version
btest --help | head -5
echo
echo \"=== Files ===\"
pacman -Ql btest-rs
echo
echo \"=== SUCCESS ===\"
"
'

deploy/deb/build-deb.sh Executable file

@@ -0,0 +1,208 @@
#!/usr/bin/env bash
# build-deb.sh -- Build a Debian/Ubuntu .deb package for btest-rs
#
# Usage:
# ./deploy/deb/build-deb.sh # uses dist/btest or target/release/btest
# BTEST_BIN=path/to/btest ./deploy/deb/build-deb.sh
#
# Requirements: dpkg-deb, gzip (standard on Debian/Ubuntu build hosts)
set -euo pipefail
###############################################################################
# Package metadata
###############################################################################
PKG_NAME="btest-rs"
PKG_VERSION="0.6.0"
PKG_ARCH="amd64"
PKG_MAINTAINER="Siavash Sameni <manwe@manko.yoga>"
PKG_DESCRIPTION="MikroTik Bandwidth Test (btest) server and client with EC-SRP5 auth"
PKG_HOMEPAGE="https://github.com/manawenuz/btest-rs"
PKG_LICENSE="MIT AND Apache-2.0"
PKG_SECTION="net"
PKG_PRIORITY="optional"
###############################################################################
# Paths
###############################################################################
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Locate the pre-built binary
if [[ -n "${BTEST_BIN:-}" ]]; then
: # caller provided an explicit path
elif [[ -f "$REPO_ROOT/dist/btest" ]]; then
BTEST_BIN="$REPO_ROOT/dist/btest"
elif [[ -f "$REPO_ROOT/target/release/btest" ]]; then
BTEST_BIN="$REPO_ROOT/target/release/btest"
else
echo "Error: cannot find btest binary."
echo " Build first (cargo build --release) or set BTEST_BIN=path/to/btest"
exit 1
fi
# Verify the binary exists and is executable
if [[ ! -f "$BTEST_BIN" ]]; then
echo "Error: $BTEST_BIN does not exist."
exit 1
fi
echo "==> Using binary: $BTEST_BIN"
###############################################################################
# Prepare staging tree
###############################################################################
DEB_FILE="${PKG_NAME}_${PKG_VERSION}_${PKG_ARCH}.deb"
STAGE="$(mktemp -d)"
trap 'rm -rf "$STAGE"' EXIT
echo "==> Staging in $STAGE"
# Binary
install -Dm755 "$BTEST_BIN" "$STAGE/usr/bin/btest"
# Man page
if [[ -f "$REPO_ROOT/docs/man/btest.1" ]]; then
install -Dm644 "$REPO_ROOT/docs/man/btest.1" "$STAGE/usr/share/man/man1/btest.1"
gzip -9n "$STAGE/usr/share/man/man1/btest.1"
else
echo "Warning: docs/man/btest.1 not found -- skipping man page"
fi
# systemd service unit
install -d "$STAGE/usr/lib/systemd/system"
cat > "$STAGE/usr/lib/systemd/system/btest.service" <<'UNIT'
[Unit]
Description=MikroTik Bandwidth Test Server (btest-rs)
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/btest -s
Restart=always
RestartSec=5
DynamicUser=yes
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
UNIT
# Documentation
install -Dm644 "$REPO_ROOT/README.md" "$STAGE/usr/share/doc/$PKG_NAME/README.md"
# License
install -Dm644 "$REPO_ROOT/LICENSE" "$STAGE/usr/share/licenses/$PKG_NAME/LICENSE"
# Debian copyright file (policy-compliant copy in /usr/share/doc)
install -d "$STAGE/usr/share/doc/$PKG_NAME"
cat > "$STAGE/usr/share/doc/$PKG_NAME/copyright" <<COPY
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: $PKG_NAME
Upstream-Contact: $PKG_MAINTAINER
Source: $PKG_HOMEPAGE
Files: *
Copyright: 2024-2026 Siavash Sameni
License: MIT AND Apache-2.0
COPY
###############################################################################
# Calculate installed size (in KiB, as Debian policy requires)
###############################################################################
INSTALLED_SIZE=$(du -sk "$STAGE" | cut -f1)
###############################################################################
# DEBIAN/control
###############################################################################
install -d "$STAGE/DEBIAN"
cat > "$STAGE/DEBIAN/control" <<CTRL
Package: $PKG_NAME
Version: $PKG_VERSION
Architecture: $PKG_ARCH
Maintainer: $PKG_MAINTAINER
Installed-Size: $INSTALLED_SIZE
Section: $PKG_SECTION
Priority: $PKG_PRIORITY
Homepage: $PKG_HOMEPAGE
Description: $PKG_DESCRIPTION
 A high-performance Rust implementation of the MikroTik Bandwidth Test
 protocol, supporting both server and client modes with EC-SRP5
 authentication. Supports TCP/UDP throughput testing and is fully
 compatible with RouterOS btest clients.
CTRL
###############################################################################
# DEBIAN/conffiles (mark the systemd unit as a conffile)
###############################################################################
cat > "$STAGE/DEBIAN/conffiles" <<'CF'
/usr/lib/systemd/system/btest.service
CF
###############################################################################
# Maintainer scripts
###############################################################################
# postinst -- reload systemd after install
cat > "$STAGE/DEBIAN/postinst" <<'POST'
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
if command -v systemctl >/dev/null 2>&1; then
systemctl daemon-reload || true
echo ""
echo "btest-rs installed. To start the server:"
echo " sudo systemctl enable --now btest.service"
echo ""
fi
fi
POST
chmod 755 "$STAGE/DEBIAN/postinst"
# prerm -- stop service before removal
cat > "$STAGE/DEBIAN/prerm" <<'PRERM'
#!/bin/sh
set -e
if [ "$1" = "remove" ] || [ "$1" = "deconfigure" ]; then
if command -v systemctl >/dev/null 2>&1; then
systemctl stop btest.service 2>/dev/null || true
systemctl disable btest.service 2>/dev/null || true
fi
fi
PRERM
chmod 755 "$STAGE/DEBIAN/prerm"
# postrm -- clean up after removal
cat > "$STAGE/DEBIAN/postrm" <<'POSTRM'
#!/bin/sh
set -e
if [ "$1" = "purge" ] || [ "$1" = "remove" ]; then
if command -v systemctl >/dev/null 2>&1; then
systemctl daemon-reload || true
fi
fi
POSTRM
chmod 755 "$STAGE/DEBIAN/postrm"
###############################################################################
# Build .deb
###############################################################################
OUTPUT_DIR="${OUTPUT_DIR:-$REPO_ROOT/dist}"
mkdir -p "$OUTPUT_DIR"
echo "==> Building $DEB_FILE ..."
dpkg-deb --root-owner-group --build "$STAGE" "$OUTPUT_DIR/$DEB_FILE"
echo "==> Package ready: $OUTPUT_DIR/$DEB_FILE"
echo ""
dpkg-deb --info "$OUTPUT_DIR/$DEB_FILE"
echo ""
dpkg-deb --contents "$OUTPUT_DIR/$DEB_FILE"

deploy/deb/test-deb.sh Executable file

@@ -0,0 +1,104 @@
#!/usr/bin/env bash
# test-deb.sh -- Smoke-test a btest-rs .deb inside an Ubuntu Docker container
#
# Usage:
# ./deploy/deb/test-deb.sh # auto-finds dist/*.deb
# ./deploy/deb/test-deb.sh path/to/btest-rs_*.deb
#
# Requirements: docker
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
IMAGE="${TEST_IMAGE:-ubuntu:24.04}"
###############################################################################
# Locate the .deb
###############################################################################
if [[ -n "${1:-}" ]]; then
DEB_PATH="$1"
else
DEB_PATH="$(ls -1t "$REPO_ROOT"/dist/btest-rs_*.deb 2>/dev/null | head -1 || true)"
fi
if [[ -z "$DEB_PATH" || ! -f "$DEB_PATH" ]]; then
echo "Error: no .deb file found."
echo " Build first: ./deploy/deb/build-deb.sh"
echo " Or pass path: $0 path/to/btest-rs_*.deb"
exit 1
fi
DEB_FILE="$(basename "$DEB_PATH")"
DEB_DIR="$(cd "$(dirname "$DEB_PATH")" && pwd)"
echo "==> Testing $DEB_FILE in $IMAGE"
echo ""
###############################################################################
# Run tests inside a disposable container
###############################################################################
docker run --rm \
-v "$DEB_DIR/$DEB_FILE:/tmp/$DEB_FILE:ro" \
"$IMAGE" \
bash -euxc "
###################################################################
# 1. Install the .deb
###################################################################
apt-get update -qq
dpkg -i /tmp/$DEB_FILE || apt-get install -f -y # resolve deps if any
###################################################################
# 2. Verify files are in place
###################################################################
echo '--- Checking installed files ---'
test -x /usr/bin/btest
test -f /usr/lib/systemd/system/btest.service
test -f /usr/share/doc/btest-rs/README.md
test -f /usr/share/licenses/btest-rs/LICENSE
# Man page (may be gzipped)
test -f /usr/share/man/man1/btest.1.gz || test -f /usr/share/man/man1/btest.1
echo 'All expected files present.'
###################################################################
# 3. btest --version
###################################################################
echo ''
echo '--- btest --version ---'
btest --version
###################################################################
# 4. Quick loopback server+client test
###################################################################
echo ''
echo '--- Loopback smoke test ---'
# Start server in background
btest -s &
SERVER_PID=\$!
sleep 1
# Run a short TCP test against localhost
if btest -c 127.0.0.1 -d 2 2>&1; then
echo 'Loopback TCP test passed.'
else
echo 'Warning: loopback test returned non-zero (may be expected in container).'
fi
# Tear down
kill \$SERVER_PID 2>/dev/null || true
wait \$SERVER_PID 2>/dev/null || true
###################################################################
# 5. Package metadata sanity
###################################################################
echo ''
echo '--- dpkg metadata ---'
dpkg -s btest-rs | head -20
echo ''
echo '=== All tests passed ==='
"
echo ""
echo "==> .deb smoke test completed successfully."

deploy/openwrt/Makefile Normal file

@@ -0,0 +1,57 @@
# OpenWrt package Makefile for btest-rs
#
# To build:
# 1. Clone the OpenWrt SDK for your target
# 2. Copy this directory to package/btest-rs/ in the SDK
# 3. Run: make package/btest-rs/compile V=s
#
# Or use the pre-built binary approach (see build-ipk.sh)
include $(TOPDIR)/rules.mk
PKG_NAME:=btest-rs
PKG_VERSION:=0.6.0
PKG_RELEASE:=1
PKG_SOURCE:=$(PKG_NAME)-$(PKG_VERSION).tar.gz
PKG_SOURCE_URL:=https://github.com/manawenuz/btest-rs/archive/refs/tags/v$(PKG_VERSION).tar.gz
PKG_HASH:=skip
PKG_BUILD_DEPENDS:=rust/host
PKG_BUILD_DIR:=$(BUILD_DIR)/$(PKG_NAME)-$(PKG_VERSION)
include $(INCLUDE_DIR)/package.mk
define Package/btest-rs
SECTION:=net
CATEGORY:=Network
TITLE:=MikroTik Bandwidth Test server and client
URL:=https://github.com/manawenuz/btest-rs
DEPENDS:=
PKGARCH:=$(ARCH)
endef
define Package/btest-rs/description
A Rust reimplementation of the MikroTik Bandwidth Test (btest) protocol.
Supports TCP/UDP, IPv4/IPv6, EC-SRP5 and MD5 authentication,
multi-connection, syslog, CSV output, and CPU monitoring.
endef
define Build/Compile
cd $(PKG_BUILD_DIR) && \
CARGO_TARGET_DIR=$(PKG_BUILD_DIR)/target \
cargo build --release --target $(RUSTC_TARGET)
endef
define Package/btest-rs/install
$(INSTALL_DIR) $(1)/usr/bin
$(INSTALL_BIN) $(PKG_BUILD_DIR)/target/$(RUSTC_TARGET)/release/btest $(1)/usr/bin/btest
$(INSTALL_DIR) $(1)/etc/init.d
$(INSTALL_BIN) ./files/btest.init $(1)/etc/init.d/btest
$(INSTALL_DIR) $(1)/etc/config
$(INSTALL_CONF) ./files/btest.config $(1)/etc/config/btest
endef
$(eval $(call BuildPackage,btest-rs))

deploy/openwrt/build-ipk.sh Executable file

@@ -0,0 +1,117 @@
#!/usr/bin/env bash
# Build an OpenWrt .ipk package from a pre-built static binary.
# No OpenWrt SDK needed — just packages the binary with metadata.
#
# Usage:
# ./deploy/openwrt/build-ipk.sh <arch> [binary-path]
#
# Examples:
# ./deploy/openwrt/build-ipk.sh x86_64 dist/btest # from cross-compiled binary
# ./deploy/openwrt/build-ipk.sh aarch64 dist/btest # for RPi/ARM64 routers
# ./deploy/openwrt/build-ipk.sh mipsel target/release/btest # for MIPS little-endian
#
# Supported architectures: x86_64, aarch64, arm_cortex-a7, mipsel_24kc, mips_24kc
set -euo pipefail
cd "$(dirname "$0")/../.."
ARCH="${1:?Usage: $0 <arch> [binary-path]}"
BINARY="${2:-dist/btest}"
VERSION="0.6.0"
PKG_NAME="btest-rs"
OUTPUT_DIR="dist"
if [ ! -f "$BINARY" ]; then
echo "Error: binary not found at $BINARY"
echo "Build it first: cargo build --release --target <target>"
exit 1
fi
mkdir -p "$OUTPUT_DIR"
WORKDIR=$(mktemp -d)
trap "rm -rf $WORKDIR" EXIT
echo "=== Building ${PKG_NAME}_${VERSION}_${ARCH}.ipk ==="
# Create package structure
mkdir -p "$WORKDIR/data/usr/bin"
mkdir -p "$WORKDIR/data/etc/init.d"
mkdir -p "$WORKDIR/data/etc/config"
mkdir -p "$WORKDIR/control"
# Install files
cp "$BINARY" "$WORKDIR/data/usr/bin/btest"
chmod 755 "$WORKDIR/data/usr/bin/btest"
cp deploy/openwrt/files/btest.init "$WORKDIR/data/etc/init.d/btest"
chmod 755 "$WORKDIR/data/etc/init.d/btest"
cp deploy/openwrt/files/btest.config "$WORKDIR/data/etc/config/btest"
# Calculate installed size
INSTALLED_SIZE=$(du -sk "$WORKDIR/data" | awk '{print $1}')
# Control file
cat > "$WORKDIR/control/control" << EOF
Package: ${PKG_NAME}
Version: ${VERSION}-1
Depends: libc
Source: https://github.com/manawenuz/btest-rs
License: MIT AND Apache-2.0
Section: net
SourceName: ${PKG_NAME}
Maintainer: Siavash Sameni <manwe@manko.yoga>
Architecture: ${ARCH}
Installed-Size: ${INSTALLED_SIZE}
Description: MikroTik Bandwidth Test server and client
 A Rust reimplementation of the MikroTik btest protocol.
 Supports TCP/UDP, EC-SRP5 and MD5 auth, IPv4/IPv6.
EOF
# Post-install script
cat > "$WORKDIR/control/postinst" << 'EOF'
#!/bin/sh
[ "${IPKG_NO_SCRIPT}" = "1" ] && exit 0
/etc/init.d/btest enable 2>/dev/null || true
exit 0
EOF
chmod 755 "$WORKDIR/control/postinst"
# Pre-remove script
cat > "$WORKDIR/control/prerm" << 'EOF'
#!/bin/sh
/etc/init.d/btest stop 2>/dev/null || true
/etc/init.d/btest disable 2>/dev/null || true
exit 0
EOF
chmod 755 "$WORKDIR/control/prerm"
# Conffiles
cat > "$WORKDIR/control/conffiles" << EOF
/etc/config/btest
EOF
# Build the .ipk (it's just a tar.gz of tar.gz's)
cd "$WORKDIR"
# Create data.tar.gz
(cd data && tar czf ../data.tar.gz .)
# Create control.tar.gz
(cd control && tar czf ../control.tar.gz .)
# Create debian-binary
echo "2.0" > debian-binary
# Package it all
tar czf "${PKG_NAME}_${VERSION}-1_${ARCH}.ipk" debian-binary control.tar.gz data.tar.gz
cd -
cp "$WORKDIR/${PKG_NAME}_${VERSION}-1_${ARCH}.ipk" "$OUTPUT_DIR/"
echo ""
echo "Package: $OUTPUT_DIR/${PKG_NAME}_${VERSION}-1_${ARCH}.ipk"
ls -lh "$OUTPUT_DIR/${PKG_NAME}_${VERSION}-1_${ARCH}.ipk"
echo ""
echo "Install on OpenWrt:"
echo " scp $OUTPUT_DIR/${PKG_NAME}_${VERSION}-1_${ARCH}.ipk root@router:/tmp/"
echo " ssh root@router 'opkg install /tmp/${PKG_NAME}_${VERSION}-1_${ARCH}.ipk'"
echo " ssh root@router '/etc/init.d/btest enable && /etc/init.d/btest start'"


@@ -0,0 +1,7 @@
config server
option enabled '0'
option port '2000'
option auth_user ''
option auth_pass ''
option ecsrp5 '0'
option syslog ''

deploy/openwrt/files/btest.init Executable file

@@ -0,0 +1,34 @@
#!/bin/sh /etc/rc.common
# btest-rs OpenWrt init script
START=90
STOP=10
USE_PROCD=1
start_service() {
local enabled port auth_user auth_pass ecsrp5 syslog
config_load btest
config_get_bool enabled server enabled 0
[ "$enabled" -eq 0 ] && return
config_get port server port 2000
config_get auth_user server auth_user ''
config_get auth_pass server auth_pass ''
config_get_bool ecsrp5 server ecsrp5 0
config_get syslog server syslog ''
procd_open_instance
procd_set_param command /usr/bin/btest -s -P "$port"
[ -n "$auth_user" ] && procd_append_param command -a "$auth_user"
[ -n "$auth_pass" ] && procd_append_param command -p "$auth_pass"
[ "$ecsrp5" -eq 1 ] && procd_append_param command --ecsrp5
[ -n "$syslog" ] && procd_append_param command --syslog "$syslog"
procd_set_param respawn
procd_set_param stdout 1
procd_set_param stderr 1
procd_close_instance
}

deploy/rpm/btest-rs.spec Normal file

@@ -0,0 +1,73 @@
Name: btest-rs
Version: 0.6.0
Release: 1%{?dist}
Summary: MikroTik Bandwidth Test (btest) server and client with EC-SRP5 auth
License: MIT AND Apache-2.0
URL: https://github.com/manawenuz/btest-rs
Source0: https://github.com/manawenuz/btest-rs/archive/refs/tags/v%{version}.tar.gz
BuildRequires: cargo
BuildRequires: rust
ExclusiveArch: x86_64 aarch64
%description
A Rust reimplementation of the MikroTik Bandwidth Test (btest) protocol,
providing both server and client functionality with EC-SRP5 authentication.
%prep
%autosetup -n %{name}-%{version}
%build
export CARGO_TARGET_DIR=target
cargo build --release
%install
install -Dm755 target/release/btest %{buildroot}%{_bindir}/btest
install -Dm644 docs/man/btest.1 %{buildroot}%{_mandir}/man1/btest.1
install -Dm644 LICENSE %{buildroot}%{_datadir}/licenses/%{name}/LICENSE
# systemd service unit
install -d %{buildroot}%{_unitdir}
cat > %{buildroot}%{_unitdir}/btest.service << 'EOF'
[Unit]
Description=MikroTik Bandwidth Test Server (btest-rs)
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/btest -s
Restart=always
RestartSec=5
DynamicUser=yes
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
%files
%license LICENSE
%{_bindir}/btest
%{_mandir}/man1/btest.1*
%{_unitdir}/btest.service
%post
%systemd_post btest.service
%preun
%systemd_preun btest.service
%postun
%systemd_postun_with_restart btest.service
%changelog
* Mon Mar 30 2026 Siavash Sameni <manwe@manko.yoga> - 0.6.0-1
- Initial RPM package

deploy/rpm/build-rpm.sh Executable file

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
# build-rpm.sh — Build the btest-rs RPM package
set -euo pipefail
SPEC_DIR="$(cd "$(dirname "$0")" && pwd)"
SPEC_FILE="${SPEC_DIR}/btest-rs.spec"
VERSION="0.6.0"
TARBALL="v${VERSION}.tar.gz"
SOURCE_URL="https://github.com/manawenuz/btest-rs/archive/refs/tags/${TARBALL}"
echo "==> Setting up rpmbuild tree"
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo "==> Downloading source tarball"
if [ ! -f ~/rpmbuild/SOURCES/"${TARBALL}" ]; then
curl -fSL -o ~/rpmbuild/SOURCES/"${TARBALL}" "${SOURCE_URL}"
else
echo " (already present, skipping download)"
fi
echo "==> Copying spec file"
cp "${SPEC_FILE}" ~/rpmbuild/SPECS/btest-rs.spec
echo "==> Building RPM"
rpmbuild -ba ~/rpmbuild/SPECS/btest-rs.spec
echo ""
echo "==> Build complete. Packages:"
find ~/rpmbuild/RPMS -name '*.rpm' -print
find ~/rpmbuild/SRPMS -name '*.rpm' -print

deploy/rpm/test-rpm.sh Executable file

@@ -0,0 +1,75 @@
#!/usr/bin/env bash
# test-rpm.sh — Test the btest-rs RPM build inside a Fedora container
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
IMAGE="fedora:latest"
echo "==> Testing RPM build in ${IMAGE}"
docker run --rm \
-v "${REPO_ROOT}:/workspace:ro" \
"${IMAGE}" \
bash -euxc '
# ── Install build dependencies ──
dnf install -y rpm-build rpmdevtools curl gcc make \
systemd-rpm-macros
# Install Rust toolchain
curl --proto "=https" --tlsv1.2 -sSf https://sh.rustup.rs \
| sh -s -- -y --profile minimal
source "$HOME/.cargo/env"
# ── Set up rpmbuild tree ──
rpmdev-setuptree
VERSION="0.6.0"
TARBALL="v${VERSION}.tar.gz"
# Copy spec
cp /workspace/deploy/rpm/btest-rs.spec ~/rpmbuild/SPECS/
# Create source tarball from workspace
# rpmbuild expects btest-rs-VERSION/ top-level directory
mkdir -p /tmp/btest-rs-${VERSION}
cp -a /workspace/. /tmp/btest-rs-${VERSION}/
tar czf ~/rpmbuild/SOURCES/${TARBALL} -C /tmp btest-rs-${VERSION}
# ── Build RPM ──
rpmbuild -ba ~/rpmbuild/SPECS/btest-rs.spec
# ── Install the RPM ──
RPM=$(find ~/rpmbuild/RPMS -name "btest-rs-*.rpm" | head -1)
echo "Installing: ${RPM}"
dnf install -y "${RPM}"
# ── Verify installation ──
echo "--- btest --version ---"
btest --version
echo "--- Checking systemd unit ---"
systemctl cat btest.service || true
echo "--- Checking man page ---"
test -f /usr/share/man/man1/btest.1* && echo "man page OK" || echo "man page MISSING"
echo "--- Checking license ---"
test -f /usr/share/licenses/btest-rs/LICENSE && echo "license OK" || echo "license MISSING"
# ── Loopback bandwidth test ──
echo "--- Starting loopback test ---"
btest -s &
SERVER_PID=$!
sleep 2
btest -c 127.0.0.1 --duration 3 && echo "Loopback test PASSED" \
|| echo "Loopback test FAILED (exit $?)"
kill "${SERVER_PID}" 2>/dev/null || true
wait "${SERVER_PID}" 2>/dev/null || true
echo "==> All RPM tests completed."
'
echo "==> Fedora container test finished."


@@ -0,0 +1,96 @@
# btest-rs syslog configuration for syslog-ng
# Add this to your syslog-ng.conf or include from conf.d/
#
# Copy to: /var/data/syslogng/config/conf.d/btest.conf
# Or append to your main syslog-ng.conf
#
# Note: uses message-based matching (not program()) because
# MikroTik sources use flags(no-parse) which skips program extraction.
# Filter for btest-rs messages
filter f_btest {
match("btest-rs:" value("MESSAGE"));
};
# Filter subcategories
filter f_btest_auth {
match("btest-rs:" value("MESSAGE")) and (
match("AUTH_SUCCESS" value("MESSAGE")) or
match("AUTH_FAILURE" value("MESSAGE"))
);
};
filter f_btest_test {
match("btest-rs:" value("MESSAGE")) and (
match("TEST_START" value("MESSAGE")) or
match("TEST_END" value("MESSAGE")) or
match("TEST_RESULT" value("MESSAGE"))
);
};
# All btest logs
destination d_btest_all {
file(
"/var/log/remote/btest/all.log"
create_dirs(yes)
dir_perm(0755)
perm(0644)
template(t_mikrotik_format)
);
};
# Auth events (successes + failures)
destination d_btest_auth {
file(
"/var/log/remote/btest/auth.log"
create_dirs(yes)
dir_perm(0755)
perm(0644)
template(t_mikrotik_format)
);
};
# Test events (start/stop/results)
destination d_btest_tests {
file(
"/var/log/remote/btest/tests.log"
create_dirs(yes)
dir_perm(0755)
perm(0644)
template(t_mikrotik_format)
);
};
# Per-day logs
destination d_btest_daily {
file(
"/var/log/remote/btest/${YEAR}-${MONTH}-${DAY}.log"
create_dirs(yes)
dir_perm(0755)
perm(0644)
template(t_mikrotik_format)
);
};
# Log paths
log {
source(s_network_udp);
source(s_network_tcp);
filter(f_btest);
destination(d_btest_all);
destination(d_btest_daily);
};
log {
source(s_network_udp);
source(s_network_tcp);
filter(f_btest_auth);
destination(d_btest_auth);
};
log {
source(s_network_udp);
source(s_network_tcp);
filter(f_btest_test);
destination(d_btest_tests);
};


@@ -1,7 +1,7 @@
services:
btest-server:
build: .
image: git.manko.yoga/manawenuz/btest-rs:latest
image: ghcr.io/manawenuz/btest-rs:latest
container_name: btest-server
ports:
- "2000:2000/tcp"
@@ -13,7 +13,7 @@ services:
# Server with authentication enabled
btest-server-auth:
build: .
image: git.manko.yoga/manawenuz/btest-rs:latest
image: ghcr.io/manawenuz/btest-rs:latest
container_name: btest-server-auth
ports:
- "2010:2000/tcp"


@@ -13,22 +13,31 @@ graph TB
client["client.rs<br/>Client mode"]
protocol["protocol.rs<br/>Wire protocol types"]
auth["auth.rs<br/>MD5 authentication"]
ecsrp5["ecsrp5.rs<br/>EC-SRP5 authentication<br/>(Curve25519 Weierstrass)"]
bandwidth["bandwidth.rs<br/>Rate control & reporting"]
csv_output["csv_output.rs<br/>CSV result logging"]
syslog["syslog_logger.rs<br/>Remote syslog (RFC 3164)"]
lib["lib.rs<br/>Public API for tests"]
main --> server
main --> client
main --> bandwidth
main --> csv_output
main --> syslog
server --> protocol
server --> auth
server --> ecsrp5
server --> bandwidth
server --> syslog
client --> protocol
client --> auth
client --> ecsrp5
client --> bandwidth
lib --> server
lib --> client
lib --> protocol
lib --> auth
lib --> ecsrp5
lib --> bandwidth
```
@@ -50,12 +59,20 @@ sequenceDiagram
alt No auth configured
SRV->>TCP: AUTH_OK [01 00 00 00]
else MD5 auth
else MD5 auth (RouterOS < 6.43)
SRV->>TCP: AUTH_REQUIRED [02 00 00 00]
SRV->>TCP: Challenge [16 random bytes]
MK->>TCP: Response [16 hash + 32 username]
Note over SRV: Verify MD5(pass + MD5(pass + challenge))
SRV->>TCP: AUTH_OK or AUTH_FAILED
else EC-SRP5 auth (RouterOS >= 6.43, --ecsrp5 flag)
SRV->>TCP: EC-SRP5 [03 00 00 00]
MK->>TCP: [len][username\0][client_pubkey:32][parity:1]
SRV->>TCP: [len][server_pubkey:32][parity:1][salt:16]
MK->>TCP: [len][client_confirmation:32]
SRV->>TCP: [len][server_confirmation:32]
Note over SRV: Curve25519 Weierstrass EC-SRP5<br/>See docs/ecsrp5-research.md
SRV->>TCP: AUTH_OK [01 00 00 00]
end
alt TCP mode
@@ -97,14 +114,18 @@ sequenceDiagram
CLI->>TCP: Command [16 bytes]
Note over CLI: direction bits tell server<br/>what to do (TX/RX/BOTH)
alt Auth response 01
alt Auth response 01 (no auth)
Note over CLI: No auth, proceed
else Auth response 02 (MD5)
MK->>TCP: Challenge
CLI->>TCP: MD5 response
MK->>TCP: Challenge [16 random bytes]
CLI->>TCP: MD5 response [48 bytes]
MK->>TCP: AUTH_OK
else Auth response 03 (EC-SRP5)
Note over CLI: Not supported yet
CLI->>TCP: [len][username\0][client_pubkey:32][parity:1]
MK->>TCP: [len][server_pubkey:32][parity:1][salt:16]
CLI->>TCP: [len][client_confirmation:32]
MK->>TCP: [len][server_confirmation:32]
MK->>TCP: AUTH_OK
end
Note over CLI,MK: Data transfer begins<br/>(TCP or UDP, same as server)
@@ -148,56 +169,115 @@ graph TB
## Key Design Decisions
### 1. Tokio async runtime
All I/O is async via tokio. Each client connection spawns independent tasks for TX, RX, and status exchange. This allows handling hundreds of concurrent connections on a single thread pool.
### 2. Lock-free shared state
TX/RX threads and the status loop share bandwidth counters via `AtomicU64`. No mutexes needed — `swap(0)` atomically reads and resets counters each interval.
TX/RX threads and the status loop share bandwidth counters via `AtomicU64`. No mutexes needed -- `swap(0)` atomically reads and resets counters each interval.
### 3. Sequential status loop (matching C pselect)
The UDP status exchange uses a sequential timeout-read-then-send pattern rather than `tokio::select!`. This ensures our status messages are sent exactly every 1 second, preventing MikroTik's speed adaptation from seeing irregular feedback.
### 4. Direction bits from server perspective
The direction byte in the protocol means what the **server** should do:
- `0x01` (CMD_DIR_RX) = server receives
- `0x02` (CMD_DIR_TX) = server transmits
- `0x03` (CMD_DIR_BOTH) = bidirectional
The client inverts before sending: client "transmit" `CMD_DIR_RX` (telling server to receive).
The client inverts before sending: client "transmit" sends `CMD_DIR_RX` (telling server to receive).
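The inversion can be sketched in Python (constant names mirror the protocol values above; the helper name is illustrative):

```python
# Direction bits are defined from the server's perspective.
CMD_DIR_RX = 0x01    # server receives
CMD_DIR_TX = 0x02    # server transmits
CMD_DIR_BOTH = 0x03  # bidirectional

def direction_byte(client_tx, client_rx):
    """Build the command's direction byte from the client's options.

    The wire value says what the *server* should do, so the client's
    transmit flag maps to the server's receive bit and vice versa.
    """
    value = 0
    if client_tx:
        value |= CMD_DIR_RX  # client transmits -> server receives
    if client_rx:
        value |= CMD_DIR_TX  # client receives -> server transmits
    return value
```

With both flags set the result is `0x03` (CMD_DIR_BOTH), so the bidirectional case needs no special handling.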
### 5. TCP socket half keepalive
When only one direction is active (e.g., TX only), the unused socket half is kept alive. Dropping `OwnedWriteHalf` sends a TCP FIN, which MikroTik interprets as disconnection.
### 6. Static musl binary
Release builds use musl for a fully static binary with zero runtime dependencies. The binary is 2 MB and runs on any Linux.
Release builds use musl for a fully static binary with zero runtime dependencies. The binary is approximately 2 MB and runs on any Linux distribution.
### 7. EC-SRP5 with big integer arithmetic
The EC-SRP5 implementation uses `num-bigint` for Curve25519 Weierstrass-form elliptic curve arithmetic. MikroTik's authentication uses the Weierstrass form (not the more common Montgomery or Edwards forms), requiring direct field arithmetic over the prime `2^255 - 19`. The implementation includes point multiplication, `lift_x`, `redp1` (hash-to-curve), and Montgomery coordinate conversion.
### 8. Global singletons for syslog and CSV
The syslog and CSV modules use `Mutex<Option<...>>` global statics. This avoids threading state through every function call while remaining safe. Both modules are initialized once at startup and used from any async task via their public API functions.
### 9. Shared BandwidthState for client duration timeout
When running with `--duration`, the tokio timeout cancels the client future. To preserve stats accumulated during the test, `BandwidthState` is created in `main()` and passed as an `Arc` into `run_client()`. The state survives cancellation because `main()` holds a reference. The `record_interval()` method accumulates totals that `summary()` returns.
### 10. IPv6 socket handling
IPv6 requires special handling on macOS:
- UDP sockets bind to `[::]` for IPv6 peers, `0.0.0.0` for IPv4
- Socket send/receive buffers set to 4 MB via `socket2` before wrapping with tokio
- `SocketAddr::new()` used instead of string formatting (avoids `[addr]:port` parsing issues)
- Connected sockets preferred for single-connection (avoids ENOBUFS on `send_to()`)
- NDP probe packet sent before data blast to populate neighbor cache
- Adaptive backoff on ENOBUFS (200μs→10ms, resets on success)
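The ENOBUFS backoff can be sketched as follows; the doubling growth step is an assumption (the text above only fixes the 200 µs floor and 10 ms ceiling):

```python
class EnobufsBackoff:
    """Adaptive send backoff: grows on each ENOBUFS, resets on success.

    The 200 us floor and 10 ms ceiling match the range described above;
    the doubling step is illustrative, not taken from the source.
    """
    MIN_US = 200
    MAX_US = 10_000

    def __init__(self):
        self.delay_us = self.MIN_US

    def on_enobufs(self):
        """Return the delay (in microseconds) to sleep before retrying,
        then grow the next delay, capped at the ceiling."""
        current = self.delay_us
        self.delay_us = min(self.delay_us * 2, self.MAX_US)
        return current

    def on_success(self):
        # Any successful send resets the backoff to the floor.
        self.delay_us = self.MIN_US
```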
### 11. CPU usage monitoring
A background OS thread samples system CPU every 1 second via:
- **macOS:** `host_statistics(HOST_CPU_LOAD_INFO)` — returns user/system/idle/nice ticks
- **Linux:** `/proc/stat` — reads aggregate CPU line
The percentage is stored in a global `AtomicU8` and included in every status message at byte 1 using MikroTik's encoding: `0x80 | percentage`. On receive, the remote CPU is decoded with `byte & 0x7F` and capped at 100%. Both local and remote CPU are displayed per interval and logged to CSV/syslog.
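The encoding described above is a one-liner in each direction; a Python sketch:

```python
def encode_cpu(percentage):
    """Encode local CPU usage for byte 1 of a status message using
    MikroTik's convention: high bit set, low 7 bits = percentage."""
    return 0x80 | min(percentage, 100)

def decode_cpu(byte):
    """Decode remote CPU usage from a status byte, capped at 100%."""
    return min(byte & 0x7F, 100)
```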
## File Layout
```
btest-rs/
├── src/
│ ├── main.rs # CLI entry point, argument parsing
│ ├── lib.rs # Public API (used by integration tests)
│ ├── protocol.rs # Wire format: Command, StatusMessage, constants
│ ├── auth.rs # MD5 challenge-response authentication
│ ├── server.rs # Server mode: listener, TCP/UDP handlers
│ ├── client.rs # Client mode: connector, TCP/UDP handlers
│ └── bandwidth.rs # Rate limiting, formatting, shared state
│ ├── main.rs # CLI entry point, argument parsing (clap)
│ ├── lib.rs # Public API (used by integration tests)
│ ├── protocol.rs # Wire format: Command, StatusMessage, constants
│ ├── auth.rs # MD5 challenge-response authentication
│ ├── ecsrp5.rs # EC-SRP5 authentication (Curve25519 Weierstrass)
│ ├── server.rs # Server mode: listener, TCP/UDP handlers
│ ├── client.rs # Client mode: connector, TCP/UDP handlers
│ ├── bandwidth.rs # Rate limiting, formatting, shared state
│ ├── cpu.rs # CPU usage sampler (macOS + Linux)
│ ├── csv_output.rs # CSV result logging (append-mode, auto-header)
│ └── syslog_logger.rs # Remote syslog sender (RFC 3164 / BSD format)
├── tests/
│ └── integration_test.rs # End-to-end server/client tests
├── scripts/
│ ├── build-linux.sh # Cross-compile for x86_64 Linux
│ ├── install-service.sh # systemd service installer
│ ├── test-local.sh # Loopback self-test
│ ├── test-mikrotik.sh # Test against MikroTik device
│ └── test-docker.sh # Docker container test
│ ├── build-linux.sh # Cross-compile for x86_64 Linux (musl)
│ ├── build-macos-release.sh # macOS release build
│ ├── install-service.sh # systemd service installer
│ ├── push-docker.sh # Push Docker image to registry
│ ├── test-local.sh # Loopback self-test
│ ├── test-mikrotik.sh # Test against MikroTik device
│ ├── test-docker.sh # Docker container test
│ └── debug-capture.sh # Packet capture for debugging
├── docs/
│ ├── architecture.md # This file
│ ├── protocol.md # Protocol specification
│ ├── user-guide.md # Usage documentation
│ └── docker.md # Docker & deployment guide
├── Dockerfile # Production Docker image
├── Dockerfile.cross # Cross-compilation for Linux x86_64
├── docker-compose.yml # Docker Compose configuration
├── Cargo.toml
└── btest-opensource/ # Original C implementation (git submodule)
│ ├── architecture.md # This file
│ ├── protocol.md # Protocol specification
│ ├── user-guide.md # Usage documentation
│ ├── docker.md # Docker & deployment guide
│ ├── ecsrp5-research.md # EC-SRP5 reverse-engineering notes
│ └── man/
│ └── btest.1 # Unix manual page (troff format)
├── tests/
│ ├── integration_test.rs # Basic server/client handshake tests
│ ├── ecsrp5_test.rs # EC-SRP5 authentication tests
│ └── full_integration_test.rs # Comprehensive: all protocols, IPv4/6, CSV, syslog
├── deploy/
│ └── syslog-ng-btest.conf # syslog-ng configuration for btest events
├── proto-test/ # Python EC-SRP5 prototype (research branch)
│ ├── btest_ecsrp5_client.py # Working Python btest EC-SRP5 client
│ ├── btest_mitm.py # MITM proxy for protocol analysis
│ └── elliptic_curves.py # Curve25519 Weierstrass (MarginResearch)
├── KNOWN_ISSUES.md # Known bugs and platform limitations
├── Dockerfile # Production Docker image (multi-stage)
├── Dockerfile.cross # Cross-compilation for Linux x86_64
├── docker-compose.yml # Docker Compose configuration
├── Cargo.toml # Rust package manifest
├── Cargo.lock # Dependency lock file
├── LICENSE # MIT License
└── btest-opensource/ # Original C implementation (git submodule)
```

View File

@@ -1,26 +1,43 @@
# Docker & Deployment Guide
# Docker and Deployment Guide
## Container Registry
## Container Registries
Images are published to:
```
git.manko.yoga/manawenuz/btest-rs
git.manko.yoga/manawenuz/btest-rs # Gitea registry
ghcr.io/manawenuz/btest-rs # GitHub Container Registry
```
## Quick Run (Ephemeral)
## Quick Start
### Server (one-liner)
### Docker Compose (recommended)
```bash
# Server with no authentication
docker compose up -d
# Server with authentication
docker compose --profile auth up -d
# View logs
docker compose logs -f
```
### One-liner server
```bash
# Build and run server directly
docker build -t btest-rs . && \
docker run --rm -it \
-p 2000:2000/tcp \
-p 2001-2100:2001-2100/udp \
-p 2257-2356:2257-2356/udp \
btest-rs -s -v
```
# With authentication
### One-liner server with authentication
```bash
docker run --rm -it \
-p 2000:2000/tcp \
-p 2001-2100:2001-2100/udp \
@@ -28,7 +45,28 @@ docker run --rm -it \
btest-rs -s -a admin -p password -v
```
### Client (one-liner)
### Server with EC-SRP5 authentication
```bash
docker run --rm -it \
-p 2000:2000/tcp \
-p 2001-2100:2001-2100/udp \
-p 2257-2356:2257-2356/udp \
btest-rs -s -a admin -p password --ecsrp5 -v
```
### Server with syslog and CSV
```bash
docker run --rm -it \
-p 2000:2000/tcp \
-p 2001-2100:2001-2100/udp \
-p 2257-2356:2257-2356/udp \
-v /var/log/btest:/data \
btest-rs -s -a admin -p password --syslog 192.168.1.1:514 --csv /data/results.csv -v
```
### Client mode
```bash
# TCP download test against MikroTik
@@ -36,36 +74,50 @@ docker run --rm -it btest-rs -c 192.168.88.1 -r
# UDP bidirectional
docker run --rm -it btest-rs -c 192.168.88.1 -t -r -u
# Timed test with CSV output
docker run --rm -it \
-v $(pwd):/data \
btest-rs -c 192.168.88.1 -r -d 30 --csv /data/results.csv
# With authentication
docker run --rm -it btest-rs -c 192.168.88.1 -r -a admin -p password
```
### Using pre-built image from registry
```bash
# Pull from Gitea registry
docker pull git.manko.yoga/manawenuz/btest-rs:latest
docker pull ghcr.io/manawenuz/btest-rs:latest
# Run server
docker run --rm -it \
-p 2000:2000/tcp \
-p 2001-2100:2001-2100/udp \
-p 2257-2356:2257-2356/udp \
git.manko.yoga/manawenuz/btest-rs:latest -s -v
ghcr.io/manawenuz/btest-rs:latest -s -v
```
## Docker Compose
### Basic server
The `docker-compose.yml` file provides two service profiles:
### Default profile (no auth)
```bash
docker compose up -d
```
### Server with authentication
Starts a server on port 2000 with verbose logging and no authentication.
### Auth profile
```bash
docker compose --profile auth up -d
```
Starts an additional server on port 2010 with MD5 authentication (user: admin, password: password).
### docker-compose.yml
```yaml
@@ -94,7 +146,23 @@ services:
- auth
```
## Building
## Dockerfile
The production Dockerfile uses a multi-stage build:
1. **Build stage** -- Rust 1.86 slim image, compiles a release binary
2. **Runtime stage** -- Debian Bookworm slim, copies only the binary
The resulting image is approximately 80 MB. The binary itself is about 2 MB.
Exposed ports:
- `2000/tcp` -- control channel
- `2001-2100/udp` -- server-side data ports
- `2257-2356/udp` -- client-side data ports
Default entrypoint: `btest -s`
## Building Images
### Local build (native)
@@ -107,27 +175,26 @@ cargo build --release
```bash
scripts/build-linux.sh
# Binary at: dist/btest (static musl, 2 MB)
# Binary at: dist/btest (static musl, ~2 MB)
```
### Docker image build
```bash
# Production image (for running)
# Production image
docker build -t btest-rs .
# With custom tag
docker build -t git.manko.yoga/manawenuz/btest-rs:latest .
docker build -t git.manko.yoga/manawenuz/btest-rs:0.1.0 .
docker build -t git.manko.yoga/manawenuz/btest-rs:0.6.0 .
```
### Multi-platform build
```bash
# Build for both ARM64 and x86_64
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t git.manko.yoga/manawenuz/btest-rs:latest \
-t ghcr.io/manawenuz/btest-rs:latest \
--push .
```
@@ -142,14 +209,14 @@ docker build -t git.manko.yoga/manawenuz/btest-rs:latest .
docker push git.manko.yoga/manawenuz/btest-rs:latest
# Also tag with version
docker tag git.manko.yoga/manawenuz/btest-rs:latest \
git.manko.yoga/manawenuz/btest-rs:0.1.0
docker push git.manko.yoga/manawenuz/btest-rs:0.1.0
docker tag ghcr.io/manawenuz/btest-rs:latest \
git.manko.yoga/manawenuz/btest-rs:0.6.0
docker push git.manko.yoga/manawenuz/btest-rs:0.6.0
```
## Deployment on Linux Server
## Deployment Options
### Option 1: Docker
### Option 1: Docker (single container)
```bash
docker run -d --name btest-server \
@@ -157,8 +224,8 @@ docker run -d --name btest-server \
-p 2000:2000/tcp \
-p 2001-2100:2001-2100/udp \
-p 2257-2356:2257-2356/udp \
git.manko.yoga/manawenuz/btest-rs:latest \
-s -a admin -p password -v
ghcr.io/manawenuz/btest-rs:latest \
-s -a admin -p password --ecsrp5 -v
```
### Option 2: Static binary + systemd
@@ -167,11 +234,28 @@ docker run -d --name btest-server \
# Copy binary to server
scp dist/btest root@server:/usr/local/bin/btest
# Copy and run installer
# Run the installer
scp scripts/install-service.sh root@server:/tmp/
ssh root@server "bash /tmp/install-service.sh --auth-user admin --auth-pass password"
```
The installer script:
- Creates a dedicated `btest` system user
- Installs a hardened systemd unit with security options (NoNewPrivileges, ProtectSystem, PrivateTmp)
- Grants `CAP_NET_BIND_SERVICE` for binding to ports below 1024
- Enables and starts the service
- Supports `--auth-user`, `--auth-pass`, and `--port` options
Useful systemd commands after installation:
```bash
systemctl status btest # Check status
systemctl stop btest # Stop the service
systemctl restart btest # Restart
journalctl -u btest -f # Follow logs
systemctl disable btest # Disable autostart
```
### Option 3: Docker Compose on server
```bash
@@ -183,9 +267,9 @@ ssh root@server "cd /opt/btest-rs && docker compose up -d"
| Port | Protocol | Purpose |
|------|----------|---------|
| 2000 | TCP | Control channel (handshake, auth, status) |
| 2000 | TCP | Control channel (handshake, auth, status exchange) |
| 2001-2100 | UDP | Server-side data ports |
| 2257-2356 | UDP | Client-side data ports (2001+256) |
| 2257-2356 | UDP | Client-side data ports (server_port + 256) |
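The fixed +256 offset makes the client-side range derivable from the server-side one; a trivial Python sketch:

```python
def client_udp_port(server_udp_port):
    """Client-side UDP data port is the server-side port plus 256."""
    return server_udp_port + 256
```

So the server range 2001-2100 maps to the client range 2257-2356.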
### Firewall rules (iptables)
@@ -203,20 +287,35 @@ ufw allow 2001:2100/udp
ufw allow 2257:2356/udp
```
### Firewall rules (nftables)
```bash
nft add rule inet filter input tcp dport 2000 accept
nft add rule inet filter input udp dport 2001-2100 accept
nft add rule inet filter input udp dport 2257-2356 accept
```
## Health Check
```bash
# Check if server is responding
# Check if server is responding (TCP handshake)
nc -zv <server-ip> 2000
# Check Docker container
# Check Docker container status
docker logs btest-server
docker exec btest-server ps aux
docker ps --filter name=btest-server
# Check systemd service
systemctl status btest
journalctl -u btest --since "5 minutes ago"
```
## Resource Usage
- **Memory**: ~5 MB base, +1 MB per active connection
- **CPU**: Minimal when idle, scales with bandwidth
- **Binary size**: 2 MB (static musl build)
- **Docker image**: ~80 MB (Debian slim + binary)
| Resource | Value |
|----------|-------|
| Memory (idle) | ~5 MB |
| Memory (per active connection) | +1 MB |
| CPU | Minimal when idle, scales with bandwidth |
| Binary size | ~2 MB (static musl build) |
| Docker image | ~80 MB (Debian slim + binary) |

238
docs/ecsrp5-research.md Normal file
View File

@@ -0,0 +1,238 @@
# EC-SRP5 Authentication Research
## Summary
MikroTik RouterOS >= 6.43 uses EC-SRP5 (Elliptic Curve Secure Remote Password) for authentication. When the btest server has auth enabled, it responds with `03 00 00 00` instead of `02 00 00 00` (legacy MD5).
**Status: Fully reverse-engineered and verified.** Python prototype authenticates successfully against MikroTik RouterOS 7.x btest server.
## Discovery Process
### Step 1: Initial Capture
Connected our client to MikroTik btest server with auth enabled. Server responded with `03 00 00 00` and waited for the client to initiate.
### Step 2: Winbox EC-SRP5 Verification
Tested the EC-SRP5 crypto implementation (from [MarginResearch/mikrotik_authentication](https://github.com/MarginResearch/mikrotik_authentication)) against MikroTik's Winbox port (8291). **Authentication succeeded**, confirming the elliptic curve math is correct.
### Step 3: Framing Discovery via MITM
The Winbox `[len][0x06][payload]` framing was rejected by the btest port. To discover the correct framing, we built a MITM proxy (`proto-test/btest_mitm.py`) and routed a MikroTik client through it to the MikroTik server.
**Finding: btest uses `[len][payload]` — identical to Winbox but without the `0x06` handler byte.**
### Step 4: Successful Authentication
Updated the Python prototype to use `[len][payload]` framing. EC-SRP5 authentication against MikroTik's btest server succeeded and data transfer began.
## Protocol Specification
### Auth Trigger
After the standard btest handshake (HELLO + Command), the server responds:
```
01 00 00 00 → No auth required
02 00 00 00 → MD5 challenge-response (RouterOS < 6.43)
03 00 00 00 → EC-SRP5 (RouterOS >= 6.43)
```
### EC-SRP5 Handshake (4 messages after `03 00 00 00`)
```mermaid
sequenceDiagram
participant C as Client
participant S as Server
Note over S: Server sent 03 00 00 00
C->>S: MSG1: [len][username\0][client_pubkey:32][parity:1]
Note over C: len prefix = 1 byte, total message = len + 1 bytes
S->>C: MSG2: [len][server_pubkey:32][parity:1][salt:16]
Note over S: len = 49 (0x31)
C->>S: MSG3: [len][client_confirmation:32]
Note over C: len = 32 (0x20)
S->>C: MSG4: [len][server_confirmation:32]
Note over S: len = 32 (0x20)
Note over S: Then continues with normal btest flow:
S->>C: AUTH_OK [01 00 00 00]
S->>C: UDP port [2 bytes BE] (if UDP mode)
```
### Framing Comparison
| Protocol | Message framing |
|----------|----------------|
| Winbox (port 8291) | `[len:1][0x06][payload]` |
| **btest (port 2000)** | **`[len:1][payload]`** |
| MAC Telnet (UDP 20561) | Control packets with magic bytes |
The `0x06` handler byte in Winbox identifies the message as an auth message. Btest omits it since the auth context is implicit after `03 00 00 00`.
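The btest framing is simple enough to sketch directly (Python; the helper names are ours):

```python
def frame(payload):
    """btest auth framing: 1-byte length prefix, then the payload.

    Winbox would insert a 0x06 handler byte after the length;
    btest omits it.
    """
    assert len(payload) <= 0xFF
    return bytes([len(payload)]) + payload

def unframe(buf):
    """Split one framed message off the front of buf -> (payload, rest)."""
    length = buf[0]
    return buf[1:1 + length], buf[1 + length:]
```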
### Captured Exchange (from MITM)
```
CLIENT → SERVER (40 bytes):
27 61 6e 74 61 72 00 38 8a 37 36 52 6a 32 e9 87 'antar.8.76Rj2..
4e 92 f8 c3 aa a1 18 da cd 71 b6 ab 76 fd 72 aa N........q..v.r.
c3 f6 6a 43 9b c8 a1 01 ..jC....
Decoded:
len=0x27 (39 bytes payload)
username="antar\0"
pubkey=388a373652...c8a1 (32 bytes)
parity=0x01
SERVER → CLIENT (50 bytes):
31 6c c9 e3 1a 79 43 4a 40 51 de fd 55 cc 8d 6d 1l...yCJ@Q..U..m
3c ec cd 73 19 1f a6 83 15 94 62 52 97 fe 5d 89 <..s......bR..].
1a 00 3c ec 65 b8 34 28 0a 16 c5 48 0d 7b 50 00 ..<.e.4(...H.{P.
e3 80 ..
Decoded:
len=0x31 (49 bytes payload)
server_pubkey=6cc9e31a...5d891a (32 bytes)
parity=0x00
salt=3cec65b834280a16c5480d7b5000e380 (16 bytes)
CLIENT → SERVER (33 bytes):
20 9b 1f 74 ec 40 31 2c ...
Decoded:
len=0x20 (32 bytes payload)
client_cc=9b1f74ec... (32 bytes, SHA256 proof)
SERVER → CLIENT (33 bytes):
20 7d 59 b3 2e 28 6e 52 ...
Decoded:
len=0x20 (32 bytes payload)
server_cc=7d59b32e... (32 bytes, SHA256 proof)
POST-AUTH:
01 00 00 00 07 fa
Decoded:
AUTH_OK=01000000
UDP_port=0x07fa (2042)
```
## Cryptographic Details
### Elliptic Curve: Curve25519 in Weierstrass Form
```
p = 2^255 - 19
r = curve order (same as Ed25519)
Montgomery A = 486662
Weierstrass conversion:
a = 0x2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa984914a144
b = 0x7b425ed097b425ed097b425ed097b425ed097b425ed097b4260b5e9c7710c864
Generator: lift_x(9) in Montgomery, converted to Weierstrass
Cofactor: 8
```
All EC math is in Weierstrass form. Public keys are transmitted as Montgomery x-coordinates (32 bytes big-endian) plus a 1-byte y-parity flag.
### Key Derivation
```
inner = SHA256(username + ":" + password)
validator_priv (i) = SHA256(salt || inner)
validator_pub (x_gamma) = i * G
```
### Shared Secret Computation
**Client side (ECPESVDP-SRP-A):**
```
v = redp1(x_gamma, parity=1) # hash-to-curve of validator pubkey
w_b = lift_x(server_pubkey) + v # undo verifier blinding
j = SHA256(client_pubkey || server_pubkey)
scalar = (i * j + s_a) mod r # combined scalar
Z = scalar * w_b # shared secret point
z = to_montgomery(Z).x # Montgomery x-coordinate
```
**Server side (ECPESVDP-SRP-B):**
```
gamma = redp1(x_gamma, parity=0)
w_a = lift_x(client_pubkey)
Z = s_b * (w_a + j * gamma) # where j = SHA256(x_w_a || x_w_b)
z = to_montgomery(Z).x
```
### Confirmation Codes
```
client_cc = SHA256(j || z)
server_cc = SHA256(j || client_cc || z)
```
Both sides verify the peer's confirmation code to ensure the shared secret matches.
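The two proofs chain a single hash each; a Python sketch of the formulas above:

```python
import hashlib

def confirmation_codes(j, z):
    """client_cc = SHA256(j || z); server_cc = SHA256(j || client_cc || z)."""
    client_cc = hashlib.sha256(j + z).digest()
    server_cc = hashlib.sha256(j + client_cc + z).digest()
    return client_cc, server_cc
```

Because `server_cc` commits to `client_cc`, the server can only produce a valid proof after seeing (and implicitly validating) the client's.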
### redp1 (Hash-to-Curve)
```python
def redp1(x_bytes: bytes, parity: int):
    # Try-and-increment hash-to-curve: hash the input, then search
    # for the first hash value that is a valid curve x-coordinate.
    x = sha256(x_bytes).digest()
    while True:
        x2 = sha256(x).digest()
        point = lift_x(int.from_bytes(x2, "big"), parity)
        if point is not None:  # x2 was on the curve
            return point
        x = (int.from_bytes(x, "big") + 1).to_bytes(32, "big")
```
## Implementation Plan for Rust
### Required Crates
| Crate | Purpose |
|-------|---------|
| `num-bigint` + `num-traits` | Big integer arithmetic for field operations |
| `sha2` | SHA-256 |
| `ecdsa` or custom | Curve25519 Weierstrass point operations |
**Note:** `curve25519-dalek` operates in Montgomery/Edwards form, not Weierstrass. We need Weierstrass arithmetic for compatibility with MikroTik's implementation. Options:
1. Use `num-bigint` for direct field arithmetic (like the Python `ecdsa` library)
2. Use the `p256` crate's infrastructure with custom curve parameters
3. Port the Python `WCurve` class directly using big integers
### Implementation Steps
1. **Port `WCurve`** — Weierstrass curve with Curve25519 parameters, point multiplication, `lift_x`, `redp1`, Montgomery conversion
2. **Port EC-SRP5 client** — generate keypair, compute shared secret, confirmation codes
3. **Port EC-SRP5 server** — verify client proof, generate server proof (for our server mode)
4. **Integrate into `auth.rs`** — handle `03 00 00 00` response with btest-specific `[len][payload]` framing
5. **Server registration** — derive salt + validator from username/password for server-side verification
### Server-Side Specifics
When our server receives a client with EC-SRP5 support, we need to:
1. Store `salt` and `x_gamma` (validator public key) per user — derived from username + password at startup
2. Generate ephemeral server keypair
3. Compute password-entangled public key: `W_b = s_b * G + redp1(x_gamma, 0)`
4. Verify client's confirmation code
5. Send server confirmation code
## Files
| File | Purpose |
|------|---------|
| `proto-test/elliptic_curves.py` | Curve25519 Weierstrass implementation |
| `proto-test/btest_ecsrp5_client.py` | Working Python btest EC-SRP5 client |
| `proto-test/btest_mitm.py` | MITM proxy for protocol analysis |
## Credits
- **[MarginResearch](https://github.com/MarginResearch/mikrotik_authentication)** — Reverse-engineered MikroTik's EC-SRP5 for Winbox/MAC Telnet
- **[Margin Research blog](https://margin.re/2022/02/mikrotik-authentication-revealed/)** — Detailed write-up of MikroTik authentication
- **btest framing discovery** — MITM analysis showing btest uses `[len][payload]` (no `0x06` handler byte)

365
docs/man/btest.1 Normal file
View File

@@ -0,0 +1,365 @@
.\" btest-rs manual page
.\" Generated for btest-rs v0.6.0
.TH BTEST 1 "2026-03-31" "btest-rs 0.6.0" "User Commands"
.SH NAME
btest \- MikroTik Bandwidth Test server and client
.SH SYNOPSIS
.B btest
.B \-s
.RI [ OPTIONS ]
.br
.B btest
.B \-c
.I HOST
.RB { \-t | \-r }
.RI [ OPTIONS ]
.SH DESCRIPTION
.B btest
is a Rust reimplementation of the MikroTik Bandwidth Test (btest) protocol.
It can operate as a server (accepting connections from MikroTik RouterOS
devices or other btest clients) or as a client (connecting to a MikroTik
device's built-in bandwidth test server).
.PP
The server listens on TCP port 2000 by default. MikroTik devices connect
to this port for handshake, authentication, and status exchange. UDP data
transfer uses ports 2001 and above.
.PP
Both MD5 challenge-response (RouterOS < 6.43) and EC-SRP5 Curve25519
(RouterOS >= 6.43) authentication are supported.
.SH OPTIONS
.SS "Mode Selection"
.TP
.BR \-s ", " \-\-server
Run in server mode. Listen for incoming connections from MikroTik devices
or other btest clients. Conflicts with
.BR \-c .
.TP
.BI \-c " HOST" "\fR, \fP" \-\-client " HOST"
Run in client mode, connecting to the specified
.IR HOST .
The host can be an IPv4 address, IPv6 address, or hostname. Conflicts with
.BR \-s .
.SS "Test Direction (client mode)"
.TP
.BR \-t ", " \-\-transmit
Client transmits data to the server (upload test). Can be combined with
.B \-r
for bidirectional testing.
.TP
.BR \-r ", " \-\-receive
Client receives data from the server (download test). Can be combined with
.B \-t
for bidirectional testing.
.SS "Protocol and Transfer"
.TP
.BR \-u ", " \-\-udp
Use UDP instead of TCP for data transfer. UDP uses separate data ports
(2001+ server side, 2257+ client side) and exchanges status messages
over the TCP control channel every second.
.TP
.BI \-b " BW" "\fR, \fP" \-\-bandwidth " BW"
Target bandwidth limit for the test. Accepts suffixes:
.B K
(kilobits/sec),
.B M
(megabits/sec),
.B G
(gigabits/sec). Examples:
.BR 100M ", " 1G ", " 500K .
Default is 0 (unlimited).
.TP
.BI \-P " PORT" "\fR, \fP" \-\-port " PORT"
TCP port to listen on in server mode or connect to in client mode.
Default: 2000.
.SS "Network Binding (server mode)"
.TP
.BI \-\-listen " ADDR"
IPv4 address to bind the server listener to. Use
.B none
to disable IPv4 listening entirely (useful with
.B \-\-listen6
for IPv6-only mode). Default: 0.0.0.0.
.TP
.BI \-\-listen6 " \fR[\fPADDR\fR]\fP"
Enable the IPv6 listener. If no address is given, binds to
.BR :: .
Experimental: TCP over IPv6 works fully on all platforms. UDP over IPv6
has issues on macOS due to kernel ENOBUFS limitations. On Linux, IPv6 UDP
works correctly.
.SS "Authentication"
.TP
.BI \-a " USER" "\fR, \fP" \-\-authuser " USER"
Authentication username. In server mode, connecting clients must provide
this username. In client mode, this username is sent to the server.
.TP
.BI \-p " PASS" "\fR, \fP" \-\-authpass " PASS"
Authentication password. In server mode, connecting clients must provide
a matching password. In client mode, this password is used to authenticate
with the server.
.TP
.B \-\-ecsrp5
Use EC-SRP5 authentication (Curve25519 Weierstrass). In server mode, this
causes the server to advertise EC-SRP5 instead of MD5 to connecting clients.
Required for RouterOS >= 6.43 devices. In client mode, the authentication
type is auto-detected from the server's response and this flag is not needed.
.SS "Test Control"
.TP
.BI \-d " SECS" "\fR, \fP" \-\-duration " SECS"
Test duration in seconds (client mode only). The client exits cleanly after
the specified number of seconds. A value of 0 means unlimited (run until
interrupted with Ctrl-C). Default: 0.
.TP
.BR \-n ", " \-\-nat
NAT traversal mode. Sends an empty UDP probe packet to the server before
starting the receive thread, opening a hole in NAT firewalls. Only relevant
for UDP receive tests when the client is behind NAT.
.SS "Logging and Output"
.TP
.BI \-\-csv " FILE"
Output test results to a CSV file. Appends a row for each completed test.
Creates the file with a header row if it does not exist. Columns:
timestamp, host, port, protocol, direction, duration_s, tx_avg_mbps,
rx_avg_mbps, tx_bytes, rx_bytes, lost_packets, auth_type.
.TP
.BR \-q ", " \-\-quiet
Suppress per-second bandwidth output to the terminal. Useful in combination
with
.B \-\-csv
for machine-readable-only output, or when running as a background service.
.TP
.BI \-\-syslog " HOST:PORT"
Send structured log events to a remote syslog server via UDP. Uses RFC 3164
(BSD syslog) format with facility local0. Events include AUTH_SUCCESS,
AUTH_FAILURE, TEST_START, and TEST_END with detailed metadata.
Example:
.BR \-\-syslog\ 192.168.1.1:514 .
.TP
.BR \-v ", " \-\-verbose
Increase log verbosity. Can be repeated for more detail:
.RS
.TP
.B \-v
Debug messages (connection lifecycle, authentication steps).
.TP
.B \-vv
Trace messages (hex dumps of protocol exchange).
.TP
.B \-vvv
Maximum verbosity.
.RE
.TP
.BR \-h ", " \-\-help
Print help information and exit.
.TP
.BR \-V ", " \-\-version
Print version information and exit.
.SH EXAMPLES
.SS "Server Mode"
Start a basic server with no authentication:
.PP
.RS
.nf
btest -s
.fi
.RE
.PP
Server with MD5 authentication:
.PP
.RS
.nf
btest -s -a admin -p password
.fi
.RE
.PP
Server with EC-SRP5 authentication (RouterOS >= 6.43):
.PP
.RS
.nf
btest -s -a admin -p password --ecsrp5
.fi
.RE
.PP
Server with syslog and CSV logging:
.PP
.RS
.nf
btest -s -a admin -p password --syslog 10.0.0.1:514 --csv /var/log/btest.csv
.fi
.RE
.PP
Server listening on IPv4 and IPv6:
.PP
.RS
.nf
btest -s --listen6
.fi
.RE
.PP
Server on a custom port with debug output:
.PP
.RS
.nf
btest -s -P 3000 -v
.fi
.RE
.SS "Client Mode"
TCP download test:
.PP
.RS
.nf
btest -c 192.168.88.1 -r
.fi
.RE
.PP
TCP upload test:
.PP
.RS
.nf
btest -c 192.168.88.1 -t
.fi
.RE
.PP
Bidirectional TCP test:
.PP
.RS
.nf
btest -c 192.168.88.1 -t -r
.fi
.RE
.PP
UDP download test:
.PP
.RS
.nf
btest -c 192.168.88.1 -r -u
.fi
.RE
.PP
UDP bidirectional with bandwidth limit:
.PP
.RS
.nf
btest -c 192.168.88.1 -t -r -u -b 100M
.fi
.RE
.PP
Timed test (30 seconds) with CSV output:
.PP
.RS
.nf
btest -c 192.168.88.1 -r -d 30 --csv results.csv
.fi
.RE
.PP
Quiet mode with CSV only:
.PP
.RS
.nf
btest -c 192.168.88.1 -r -d 60 --csv results.csv -q
.fi
.RE
.PP
With authentication:
.PP
.RS
.nf
btest -c 192.168.88.1 -r -a admin -p password
.fi
.RE
.PP
UDP receive through NAT:
.PP
.RS
.nf
btest -c 192.168.88.1 -r -u -n
.fi
.RE
.SH PORTS
.TP
.B 2000/tcp
Control channel. Used for handshake, authentication, and status exchange.
.TP
.B 2001-2100/udp
Server-side UDP data ports. Each connection uses the next available port
starting from 2001.
.TP
.B 2257-2356/udp
Client-side UDP data ports. Offset from server port by 256.
.SH EXIT STATUS
.TP
.B 0
Success. The test completed normally or the duration expired.
.TP
.B 1
Error. Failed to connect, authentication failed, or invalid arguments.
.SH ENVIRONMENT
.TP
.B RUST_LOG
Override the log filter. When set, takes precedence over the
.B \-v
flag. Example:
.BR RUST_LOG=trace .
.SH FILES
.TP
.I /usr/local/bin/btest
Default installation path for the binary.
.TP
.I /etc/systemd/system/btest.service
systemd unit file created by the install-service.sh script.
.SH AUTHENTICATION
.B btest
supports two authentication schemes:
.TP
.B MD5 (legacy)
Double MD5 challenge-response. Compatible with RouterOS versions before 6.43.
The server sends a 16-byte random challenge. The client responds with
MD5(password + MD5(password + challenge)) and the username.
.TP
.B EC-SRP5 (modern)
Elliptic Curve Secure Remote Password using Curve25519 in Weierstrass form.
Used by RouterOS >= 6.43. Provides zero-knowledge password proof. Enable on
the server with
.BR \-\-ecsrp5 .
Clients auto-detect the authentication type.
.SH MIKROTIK CONFIGURATION
Enable the bandwidth test server on MikroTik for client mode:
.PP
.RS
.nf
/tool/bandwidth-server set enabled=yes
.fi
.RE
.PP
Run a test from MikroTik connecting to a btest-rs server:
.PP
.RS
.nf
/tool/bandwidth-test address=<server-ip> direction=both \\
protocol=udp user=admin password=password
.fi
.RE
.SH SEE ALSO
.BR iperf3 (1),
.BR netperf (1)
.PP
Project documentation:
.I https://github.com/samm-git/btest-opensource
.SH CREDITS
.B btest-opensource
by Alex Samorukov \(em original C implementation and protocol
reverse-engineering (MIT License).
.PP
.B Margin Research
\(em EC-SRP5 authentication reverse-engineering for MikroTik RouterOS
(Apache License 2.0).
.PP
.B MikroTik
\(em creator of the bandwidth test protocol and RouterOS.
.SH LICENSE
MIT License. See the LICENSE file in the source distribution.
.PP
This project is derived from btest-opensource (MIT License, Copyright 2016
Alex Samorukov). The EC-SRP5 implementation is based on research by Margin
Research (Apache License 2.0).
.SH AUTHORS
btest-rs contributors.


@@ -1,6 +1,6 @@
# MikroTik Bandwidth Test Protocol Specification
This document describes the MikroTik btest wire protocol as reverse-engineered from RouterOS traffic captures. Based on the work of [Alex Samorukov](https://github.com/samm-git/btest-opensource).
This document describes the MikroTik btest wire protocol as reverse-engineered from RouterOS traffic captures. Based on the work of [Alex Samorukov](https://github.com/samm-git/btest-opensource) and [Margin Research](https://github.com/MarginResearch/mikrotik_authentication).
## Connection Setup
@@ -24,7 +24,11 @@ sequenceDiagram
S->>C: OK [01 00 00 00] or FAILED [00 00 00 00]
else EC-SRP5 authentication (RouterOS >= 6.43)
S->>C: EC_SRP5 [03 00 00 00]
Note over C,S: Not yet implemented
C->>S: MSG1 [len][username\0][client_pubkey:32][parity:1]
S->>C: MSG2 [len][server_pubkey:32][parity:1][salt:16]
C->>S: MSG3 [len][client_confirmation:32]
S->>C: MSG4 [len][server_confirmation:32]
S->>C: OK [01 00 00 00]
end
Note over C,S: Data transfer begins
@@ -32,11 +36,11 @@ sequenceDiagram
## Command Structure (16 bytes)
Sent by client after receiving HELLO.
Sent by the client after receiving HELLO.
```
Offset Size Type Field Description
────── ──── ──── ───── ───────────
------ ---- ---- ----- -----------
0 1 uint8 protocol 0x00=UDP, 0x01=TCP
1 1 uint8 direction Bit flags (server perspective)
2 1 uint8 random_data 0x00=random, 0x01=zeros
@@ -58,8 +62,8 @@ Direction bits describe what the **server** should do:
| 0x03 | DIR_BOTH | Both directions | Both directions |
**Important**: The client inverts when constructing the command:
- Client selects "transmit" sends `0x01` (server should receive)
- Client selects "receive" sends `0x02` (server should transmit)
- Client selects "transmit" -> sends `0x01` (server should receive)
- Client selects "receive" -> sends `0x02` (server should transmit)
### Default TX Sizes
@@ -124,6 +128,184 @@ Challenge: ad32d6f94d28161625f2f390bb895637 (hex)
Expected: 3c968565bc0314f281a6da1571cf7255 (hex)
```
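As a sanity check, the double-MD5 computation is small enough to sketch in a few lines. The password behind the captured challenge/response pair above is not shown in this document, so the credentials below are placeholders, and `md5_response` is an illustrative name rather than a function from btest-rs:

```python
import hashlib

def md5_response(password: str, challenge: bytes) -> bytes:
    # MD5(password + MD5(password + challenge)), per the legacy scheme
    inner = hashlib.md5(password.encode() + challenge).digest()
    return hashlib.md5(password.encode() + inner).digest()

challenge = bytes.fromhex("ad32d6f94d28161625f2f390bb895637")
resp = md5_response("placeholder-password", challenge)  # 16-byte digest
```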
## EC-SRP5 Authentication
EC-SRP5 (Elliptic Curve Secure Remote Password) is used by RouterOS >= 6.43. It provides zero-knowledge password proof using Curve25519 in Weierstrass form.
### Auth Trigger
After the standard btest handshake (HELLO + Command), the server responds with one of:
```
01 00 00 00 -> No auth required
02 00 00 00 -> MD5 challenge-response (RouterOS < 6.43)
03 00 00 00 -> EC-SRP5 (RouterOS >= 6.43)
```
### Message Framing
Unlike Winbox (port 8291) which uses `[len:1][0x06][payload]`, the btest protocol uses a simpler framing:
```
[len:1][payload]
```
The `0x06` handler byte is omitted because the authentication context is implicit after receiving `03 00 00 00`.
| Protocol | Message framing |
|----------|----------------|
| Winbox (port 8291) | `[len:1][0x06][payload]` |
| **btest (port 2000)** | **`[len:1][payload]`** |
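A minimal reader/writer for this one-byte-length framing might look as follows (a sketch with illustrative names; note the single length byte caps payloads at 255 bytes, which is ample since the largest EC-SRP5 message is 49 bytes):

```python
import socket

def write_auth_msg(sock: socket.socket, payload: bytes) -> None:
    # [len:1][payload] -- no 0x06 handler byte on port 2000
    assert len(payload) < 256
    sock.sendall(bytes([len(payload)]) + payload)

def read_auth_msg(sock: socket.socket) -> bytes:
    n = sock.recv(1)[0]           # single length byte
    buf = b""
    while len(buf) < n:           # loop until the full payload arrives
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf
```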
### EC-SRP5 Handshake (4 messages)
```mermaid
sequenceDiagram
participant C as Client
participant S as Server
Note over S: Server sent 03 00 00 00
C->>S: MSG1: [len][username\0][client_pubkey:32][parity:1]
Note over C: len = username_len + 1 + 32 + 1
S->>C: MSG2: [len][server_pubkey:32][parity:1][salt:16]
Note over S: len = 49 (0x31)
C->>S: MSG3: [len][client_confirmation:32]
Note over C: len = 32 (0x20)
S->>C: MSG4: [len][server_confirmation:32]
Note over S: len = 32 (0x20)
Note over S: Then continues with normal btest flow:
S->>C: AUTH_OK [01 00 00 00]
S->>C: UDP port [2 bytes BE] (if UDP mode)
```
### Elliptic Curve: Curve25519 in Weierstrass Form
MikroTik's EC-SRP5 uses Curve25519 parameters but operates entirely in Weierstrass form, not the more common Montgomery or Edwards representations.
```
Prime field: p = 2^255 - 19
Curve order: r = 2^252 + 27742317777372353535851937790883648493
Montgomery A: 486662
Weierstrass parameters (converted from Montgomery):
a = 0x2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa984914a144
b = 0x7b425ed097b425ed097b425ed097b425ed097b425ed097b4260b5e9c7710c864
Generator: lift_x(9) in Montgomery, converted to Weierstrass
Cofactor: 8
```
Public keys are transmitted as Montgomery x-coordinates (32 bytes big-endian) plus a 1-byte y-parity flag.
### Key Derivation
```
inner = SHA256(username + ":" + password)
salt = 16 random bytes (generated by server)
validator_priv (i) = SHA256(salt || inner)
validator_pub (x_gamma) = i * G
```
The server stores `salt` and `x_gamma` (the validator public key) for each user. In btest-rs, these are derived from the username and password at startup.
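The hash portion of this derivation can be sketched directly. The elliptic-curve step (`i * G` to produce `x_gamma`) is omitted here, and the function name is illustrative, not taken from btest-rs:

```python
import hashlib
import os

def derive_validator_priv(username: str, password: str, salt: bytes) -> bytes:
    # i = SHA256(salt || SHA256(username + ":" + password))
    inner = hashlib.sha256(f"{username}:{password}".encode()).digest()
    return hashlib.sha256(salt + inner).digest()

salt = os.urandom(16)               # 16 random bytes, server-generated
i = derive_validator_priv("admin", "password", salt)
```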
### Shared Secret Computation
**Client side (ECPESVDP-SRP-A):**
```
v = redp1(x_gamma, parity=1) # hash-to-curve of validator pubkey
w_b = lift_x(server_pubkey) + v # undo verifier blinding
j = SHA256(client_pubkey || server_pubkey)
scalar = (i * j + client_secret) mod r # combined scalar
Z = scalar * w_b # shared secret point
z = to_montgomery(Z).x # Montgomery x-coordinate
```
**Server side (ECPESVDP-SRP-B):**
```
gamma = redp1(x_gamma, parity=0)
w_a = lift_x(client_pubkey)
j = SHA256(client_pubkey || server_pubkey)
Z = server_secret * (w_a + j * gamma) # shared secret point
z = to_montgomery(Z).x
```
### Confirmation Codes
```
client_cc = SHA256(j || z)
server_cc = SHA256(j || client_cc || z)
```
Both sides verify the peer's confirmation code to ensure the shared secret matches. If either code is wrong, authentication fails.
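The confirmation step is plain hashing over the values defined above. A sketch, treating `j` and `z` as the already-computed 32-byte values:

```python
import hashlib

def confirmation_codes(j: bytes, z: bytes) -> tuple[bytes, bytes]:
    # client_cc = SHA256(j || z); server_cc = SHA256(j || client_cc || z)
    client_cc = hashlib.sha256(j + z).digest()
    server_cc = hashlib.sha256(j + client_cc + z).digest()
    return client_cc, server_cc
```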
### redp1 (Hash-to-Curve)
```
def redp1(x_bytes, parity):
    x = SHA256(x_bytes)
    while True:
        x2 = SHA256(x)
        point = lift_x(int.from_bytes(x2, "big"), parity)
        if point is valid:
            return point
        x = (int.from_bytes(x, "big") + 1).to_bytes(32, "big")
```
This deterministically maps a byte string to a valid curve point by repeatedly hashing until a valid x-coordinate is found.
### Captured Exchange (from MITM analysis)
```
CLIENT -> SERVER (40 bytes):
27 61 6e 74 61 72 00 38 8a 37 36 52 6a 32 e9 87
4e 92 f8 c3 aa a1 18 da cd 71 b6 ab 76 fd 72 aa
c3 f6 6a 43 9b c8 a1 01
Decoded:
len=0x27 (39 bytes payload)
username="antar\0"
pubkey=388a373652...c8a1 (32 bytes)
parity=0x01
SERVER -> CLIENT (50 bytes):
31 6c c9 e3 1a 79 43 4a 40 51 de fd 55 cc 8d 6d
3c ec cd 73 19 1f a6 83 15 94 62 52 97 fe 5d 89
1a 00 3c ec 65 b8 34 28 0a 16 c5 48 0d 7b 50 00
e3 80
Decoded:
len=0x31 (49 bytes payload)
server_pubkey=6cc9e31a...5d891a (32 bytes)
parity=0x00
salt=3cec65b834280a16c5480d7b5000e380 (16 bytes)
CLIENT -> SERVER (33 bytes):
20 9b 1f 74 ec 40 31 2c ...
Decoded:
len=0x20 (32 bytes payload)
client_cc=9b1f74ec... (32 bytes, SHA256 proof)
SERVER -> CLIENT (33 bytes):
20 7d 59 b3 2e 28 6e 52 ...
Decoded:
len=0x20 (32 bytes payload)
server_cc=7d59b32e... (32 bytes, SHA256 proof)
POST-AUTH:
01 00 00 00 07 fa
Decoded:
AUTH_OK=01000000
UDP_port=0x07fa (2042)
```
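The captured MSG1 above can be decoded mechanically. This parser is a sketch (not btest-rs source), but it reproduces the decode shown:

```python
def parse_msg1(frame: bytes):
    # MSG1: [len:1][username\0][client_pubkey:32][parity:1]
    n = frame[0]
    payload = frame[1:1 + n]
    nul = payload.index(0)                 # username is NUL-terminated
    username = payload[:nul].decode()
    pubkey = payload[nul + 1:nul + 33]     # 32-byte public key
    parity = payload[nul + 33]
    return username, pubkey, parity

frame = bytes.fromhex(
    "27616e74617200388a3736526a32e987"
    "4e92f8c3aaa118dacd71b6ab76fd72aa"
    "c3f66a439bc8a101"
)
username, pubkey, parity = parse_msg1(frame)  # username="antar", parity=1
```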
## TCP Data Transfer
After handshake, data flows on the **same TCP connection** used for control.
@@ -163,7 +345,7 @@ graph LR
```
Offset Size Type Field
────── ──── ──── ─────
------ ---- ---- -----
0-3 4 uint32 BE sequence_number
4+ var bytes payload (zeros or random)
```
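Building one such block is a one-liner with struct packing (a sketch, not the btest-rs implementation; 32768 is the default TCP TX size given elsewhere in this spec):

```python
import os
import struct

def tcp_data_block(seq: int, size: int, random_data: bool = False) -> bytes:
    # [seq:4 BE][payload: zeros or random], `size` bytes total
    payload = os.urandom(size - 4) if random_data else bytes(size - 4)
    return struct.pack(">I", seq) + payload

blk = tcp_data_block(1, 32768)   # one block at the default TCP TX size
```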
@@ -176,7 +358,7 @@ Exchanged every 1 second over the **TCP control channel** during UDP tests.
```
Offset Size Type Field Byte Order
────── ──── ──── ───── ──────────
------ ---- ---- ----- ----------
0 1 uint8 msg_type Always 0x07
1-4 4 uint32 BE seq_number Big-endian
5-7 3 bytes padding Always 00 00 00
@@ -208,11 +390,11 @@ sequenceDiagram
```
Server sends: 07 00 00 00 01 00 00 00 C0 2D B4 02
── ─────────── ──────── ───────────
-- ---------- -------- -----------
type seq=1 padding bytes=45,362,624
Client sends: 07 D9 00 00 01 00 00 00 00 00 00 00
── ─────────── ──────── ───────────
-- ---------- -------- -----------
type seq padding bytes=0
```
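Parsing the server message above highlights the mixed byte orders (big-endian sequence, little-endian byte count). A sketch with an illustrative function name:

```python
import struct

def parse_status(msg: bytes):
    # [type:1][seq:4 BE][pad:3][bytes:4 LE]
    assert len(msg) == 12 and msg[0] == 0x07
    seq = struct.unpack(">I", msg[1:5])[0]
    nbytes = struct.unpack("<I", msg[8:12])[0]
    return seq, nbytes

parse_status(bytes.fromhex("0700000001000000c02db402"))  # (1, 45362624)
```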
@@ -237,7 +419,7 @@ graph TD
For a target speed in bits/sec and packet size in bytes:
```
interval_ns = (1,000,000,000 × packet_size × 8) / target_speed_bps
interval_ns = (1,000,000,000 * packet_size * 8) / target_speed_bps
```
**Special case**: If interval > 500ms, clamp to exactly 1 second. This replicates a MikroTik behavior where very slow speeds get normalized to 1 packet/second.
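The formula and the clamp together, in integer arithmetic (a sketch; the actual btest-rs pacing code may differ):

```python
def send_interval_ns(packet_size: int, target_bps: int) -> int:
    # interval_ns = (1e9 * packet_size * 8) / target_speed_bps
    interval = 1_000_000_000 * packet_size * 8 // target_bps
    if interval > 500_000_000:      # >500 ms: normalize to 1 packet/second
        interval = 1_000_000_000
    return interval

send_interval_ns(1500, 100_000_000)  # 120000 ns between 1500-byte packets at 100 Mbps
```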
@@ -249,16 +431,19 @@ When `-n` / `--nat` flag is set, the client sends an empty UDP packet before sta
## Protocol Constants
```
BTEST_PORT = 2000 TCP control port
BTEST_UDP_PORT_START = 2001 First UDP data port
BTEST_PORT_CLIENT_OFFSET = 256 Client UDP port offset
BTEST_PORT = 2000 TCP control port
BTEST_UDP_PORT_START = 2001 First UDP data port
BTEST_PORT_CLIENT_OFFSET = 256 Client UDP port offset
HELLO = [01 00 00 00]
AUTH_OK = [01 00 00 00]
AUTH_REQUIRED = [02 00 00 00]
AUTH_EC_SRP5 = [03 00 00 00]
AUTH_FAILED = [00 00 00 00]
HELLO = [01 00 00 00]
AUTH_OK = [01 00 00 00]
AUTH_REQUIRED = [02 00 00 00]
AUTH_EC_SRP5 = [03 00 00 00]
AUTH_FAILED = [00 00 00 00]
STATUS_MSG_TYPE = 0x07
STATUS_MSG_SIZE = 12 bytes
STATUS_MSG_TYPE = 0x07
STATUS_MSG_SIZE = 12 bytes
DEFAULT_TCP_TX_SIZE = 32768 (0x8000)
DEFAULT_UDP_TX_SIZE = 1500 (0x05DC)
```


@@ -14,21 +14,29 @@ btest -c 192.168.88.1 -r
Run btest-rs as a server and let MikroTik devices connect for bandwidth testing.
### Basic Server
### Basic Server (No Authentication)
```bash
btest -s
```
Listens on TCP port 2000 (default). Any MikroTik device can connect without authentication.
Listens on all IPv4 interfaces, TCP port 2000. Any MikroTik device can connect without credentials.
### Server with Authentication
### Server with MD5 Authentication
```bash
btest -s -a admin -p mysecretpassword
```
MikroTik devices must provide matching credentials. Uses MD5 challenge-response authentication.
Requires connecting devices to provide matching credentials. Uses MD5 double-hash challenge-response authentication, compatible with RouterOS versions before 6.43.
### Server with EC-SRP5 Authentication
```bash
btest -s -a admin -p mysecretpassword --ecsrp5
```
Advertises EC-SRP5 (Curve25519 Weierstrass) authentication to connecting clients. Required for RouterOS >= 6.43 devices that use the modern authentication protocol.
### Custom Port
@@ -36,25 +44,90 @@ MikroTik devices must provide matching credentials. Uses MD5 challenge-response
btest -s -P 3000
```
### Custom Listen Address
```bash
# Listen only on a specific interface
btest -s --listen 10.0.0.1
# Disable IPv4, listen only on IPv6
btest -s --listen none --listen6
# Listen on both IPv4 and IPv6
btest -s --listen6
```
### IPv6 Listener (Experimental)
```bash
# IPv6 on default address (::)
btest -s --listen6
# IPv6 on a specific address
btest -s --listen6 fd00::1
```
TCP over IPv6 works fully on all platforms. UDP over IPv6 has issues on macOS due to kernel ENOBUFS limitations with `send_to()`. On Linux, IPv6 UDP works correctly.
### Syslog Integration
```bash
btest -s --syslog 192.168.1.1:514
```
Sends structured log events to a remote syslog server via UDP (RFC 3164 / BSD syslog format, facility local0). Events include:
- `AUTH_SUCCESS` -- successful authentication with peer address, username, and auth type
- `AUTH_FAILURE` -- failed authentication with peer address, username, auth type, and reason
- `TEST_START` -- test initiated with peer address, protocol, direction, and connection count
- `TEST_END` -- test completed with peer address, protocol, direction, duration, average speeds, bytes transferred, and lost packets
### CSV Output
```bash
btest -s --csv /var/log/btest-results.csv
```
Appends a row for each completed test to the specified CSV file. Creates the file with headers if it does not exist. CSV columns:
```
timestamp,host,port,protocol,direction,duration_s,tx_avg_mbps,rx_avg_mbps,tx_bytes,rx_bytes,lost_packets,auth_type
```
### Quiet Mode
```bash
btest -s --csv /var/log/btest.csv -q
```
Suppresses per-second terminal output. Useful when running as a background service with CSV or syslog logging only.
### Verbose/Debug Output
```bash
btest -s -v # Show connection info and debug messages
btest -s -vv # Show hex dumps of status exchange (for debugging)
btest -s -v # Debug messages (connection lifecycle, auth steps)
btest -s -vv # Trace messages (hex dumps of status exchange)
btest -s -vvv # Maximum verbosity
```
### MikroTik Configuration (connecting to our server)
### Combined Example
**Important: Always set Connection Count to 1.** Multi-connection mode is not supported and will cause severely degraded speeds.
```bash
btest -s -a admin -p secret --ecsrp5 --syslog 10.0.0.1:514 --csv /var/log/btest.csv -v
```
This runs a server with EC-SRP5 authentication, sends events to syslog, logs results to CSV, and prints debug output to the terminal.
### MikroTik Configuration (Connecting to Our Server)
On the MikroTik device (WinBox or CLI):
```
# CLI — always include connection-count=1
/tool/bandwidth-test address=<server-ip> direction=both protocol=udp user=admin password=mysecretpassword connection-count=1
/tool/bandwidth-test address=<server-ip> direction=both protocol=udp \
user=admin password=mysecretpassword
```
Or via WinBox: **Tools > Bandwidth Test**, enter server address, credentials, **set Connection Count to 1**, and click Start.
Or via WinBox: **Tools > Bandwidth Test**, enter the server address and credentials, and click Start.
## Client Mode
@@ -62,28 +135,27 @@ Connect to a MikroTik device's built-in bandwidth test server.
### Prerequisites
Enable btest server on MikroTik:
Enable the btest server on the MikroTik device:
```
/tool/bandwidth-server set enabled=yes
```
**Note**: If the MikroTik uses RouterOS >= 6.43 with authentication enabled, you'll need to either disable auth or use credentials. EC-SRP5 auth is not yet supported; MD5 auth works on older RouterOS versions.
### Download Test (receive)
### Download Test (Receive)
```bash
btest -c 192.168.88.1 -r
```
Measures download speed from MikroTik to your machine.
Measures download speed from the MikroTik device to your machine. The server transmits, the client receives.
### Upload Test (transmit)
### Upload Test (Transmit)
```bash
btest -c 192.168.88.1 -t
```
Measures upload speed from your machine to MikroTik.
Measures upload speed from your machine to the MikroTik device. The client transmits, the server receives.
### Bidirectional Test
@@ -101,6 +173,8 @@ btest -c 192.168.88.1 -t -u # UDP upload
btest -c 192.168.88.1 -t -r -u # UDP bidirectional
```
UDP mode uses separate data ports (2001+ on the server side, 2257+ on the client side) and exchanges status messages every second over the TCP control channel.
### Bandwidth Limiting
```bash
@@ -109,15 +183,7 @@ btest -c 192.168.88.1 -t -b 1G # Limit to 1 Gbps
btest -c 192.168.88.1 -r -b 500K # Limit to 500 Kbps
```
### NAT Traversal
If you're behind NAT and need to receive UDP data:
```bash
btest -c 192.168.88.1 -r -u -n
```
The `-n` flag sends a probe packet to open the NAT firewall hole.
Suffixes: `K` (kilobits/sec), `M` (megabits/sec), `G` (gigabits/sec). Values are in bits per second.
### With Authentication
@@ -125,62 +191,187 @@ The `-n` flag sends a probe packet to open the NAT firewall hole.
btest -c 192.168.88.1 -r -a admin -p password
```
The client auto-detects the authentication type (MD5 or EC-SRP5) from the server's response and handles it accordingly.
### NAT Traversal
```bash
btest -c 192.168.88.1 -r -u -n
```
The `-n` flag sends an empty UDP probe packet before starting the receive thread. This opens a hole in NAT firewalls so the server's UDP data packets can reach the client.
### Timed Tests
```bash
btest -c 192.168.88.1 -r -d 30 # Run for 30 seconds, then stop
btest -c 192.168.88.1 -t -r -d 60 # 60-second bidirectional test
```
The default duration is 0 (unlimited). When the duration expires, the client exits cleanly.
### CSV Output (Client Mode)
```bash
btest -c 192.168.88.1 -r -d 30 --csv results.csv
```
Appends a summary row after the test completes with the host, port, protocol, direction, duration, and auth type.
### Quiet Mode (Client)
```bash
btest -c 192.168.88.1 -r -d 10 --csv results.csv -q
```
Suppresses per-second bandwidth output to the terminal. Useful for scripted or automated testing where only the CSV file matters.
### Custom Port
```bash
btest -c 192.168.88.1 -r -P 3000
```
## Reading the Output
```
[ 1] TX 264.50 Mbps (33062912 bytes)
[ 2] TX 263.98 Mbps (32997376 bytes)
[ 2] RX 263.98 Mbps (32997012 bytes)
[ 3] RX 430.51 Mbps (53813376 bytes) lost: 5
[ 1] TX 264.50 Mbps (33062912 bytes) cpu: 12%/0%
[ 2] TX 263.98 Mbps (32997376 bytes) cpu: 15%/33%
[ 2] RX 263.98 Mbps (32997012 bytes) cpu: 15%/33%
[ 3] RX 430.51 Mbps (53813376 bytes) lost: 5 cpu: 18%/45%
[ 4] RX 450.00 Mbps (56250000 bytes) cpu: 72%/85% !
```
| Field | Meaning |
|-------|---------|
| `[ N]` | Interval number (1 per second) |
| `TX` | Data we sent (upload) |
| `RX` | Data we received (download) |
| `TX` | Data sent (upload from your perspective) |
| `RX` | Data received (download from your perspective) |
| `Mbps` | Megabits per second |
| `bytes` | Raw bytes transferred in this interval |
| `lost: N` | UDP packets lost (UDP mode only) |
| `lost: N` | UDP packets lost in this interval (UDP mode only) |
| `cpu: L%/R%` | Local CPU / Remote CPU usage percentage |
| `!` | Warning: CPU usage exceeds 70% on either side |
## CLI Reference
## Complete CLI Reference
```
btest-rs MikroTik Bandwidth Test server & client in Rust
btest-rs -- MikroTik Bandwidth Test server & client in Rust
Usage: btest [OPTIONS]
Options:
-s, --server Run in server mode
-c, --client <HOST> Run in client mode, connect to HOST
-t, --transmit Client: upload test
-r, --receive Client: download test
-u, --udp Use UDP instead of TCP
-b, --bandwidth <BW> Bandwidth limit (e.g., 100M, 1G, 500K)
-P, --port <PORT> Port number [default: 2000]
-a, --authuser <USER> Authentication username
-p, --authpass <PASS> Authentication password
-n, --nat NAT traversal mode
-v, --verbose Increase log verbosity (-v, -vv)
-h, --help Show help
-V, --version Show version
-s, --server
Run in server mode. Listens for incoming connections from MikroTik
devices or other btest clients. Conflicts with -c.
-c, --client <HOST>
Run in client mode, connecting to the specified host. The host can be
an IPv4 address, IPv6 address, or hostname. Conflicts with -s.
-t, --transmit
Client transmits data (upload test). Tells the server to receive.
Can be combined with -r for bidirectional testing.
-r, --receive
Client receives data (download test). Tells the server to transmit.
Can be combined with -t for bidirectional testing.
-u, --udp
Use UDP instead of TCP for the data transfer. UDP uses separate data
ports (2001+ server side, 2257+ client side) and exchanges status
messages over the TCP control channel every second.
-b, --bandwidth <BW>
Target bandwidth limit for the test. Accepts suffixes: K (kilobits),
M (megabits), G (gigabits). Examples: 100M, 1G, 500K. Default is 0
(unlimited).
-P, --port <PORT>
TCP port to listen on (server mode) or connect to (client mode).
[default: 2000]
--listen <ADDR>
IPv4 address to bind the server listener to. Use "none" to disable
IPv4 listening entirely (useful with --listen6 for IPv6-only mode).
[default: 0.0.0.0]
--listen6 [<ADDR>]
Enable the IPv6 listener. If no address is given, binds to [::].
Experimental: TCP over IPv6 works fully on all platforms. UDP over
IPv6 has issues on macOS due to kernel ENOBUFS limitations.
-a, --authuser <USER>
Authentication username. In server mode, clients must provide this
username. In client mode, this is sent to the server.
-p, --authpass <PASS>
Authentication password. In server mode, clients must provide a
matching password. In client mode, this is used to authenticate.
--ecsrp5
Use EC-SRP5 authentication (Curve25519 Weierstrass). In server mode,
this advertises EC-SRP5 instead of MD5 to connecting clients.
Required for RouterOS >= 6.43. In client mode, auth type is
auto-detected and this flag is not needed.
-n, --nat
NAT traversal mode. Sends an empty UDP probe packet to the server
before starting the receive thread, opening a hole in NAT firewalls.
Only relevant for UDP receive tests behind NAT.
-d, --duration <SECS>
Test duration in seconds (client mode only). The client exits cleanly
after the specified time. A value of 0 means unlimited (run until
interrupted with Ctrl-C). [default: 0]
--csv <FILE>
Output test results to a CSV file. Appends a row per completed test.
Creates the file with a header row if it does not exist. Columns:
timestamp, host, port, protocol, direction, duration_s, tx_avg_mbps,
rx_avg_mbps, tx_bytes, rx_bytes, lost_packets, auth_type.
-q, --quiet
Suppress per-second bandwidth output to the terminal. Useful in
combination with --csv for machine-readable-only output, or when
running as a background service.
--syslog <HOST:PORT>
Send structured log events to a remote syslog server via UDP. Uses
RFC 3164 (BSD syslog) format with facility local0. Events include
AUTH_SUCCESS, AUTH_FAILURE, TEST_START, and TEST_END with detailed
metadata. Example: --syslog 192.168.1.1:514
-v, --verbose...
Increase log verbosity. Can be repeated:
-v debug messages (connection lifecycle, auth steps)
-vv trace messages (hex dumps of protocol exchange)
-vvv maximum verbosity
-h, --help
Print help information
-V, --version
Print version information
```
## Tips
- **Connection Count MUST be 1** when MikroTik connects to your server. Multi-connection mode is not supported and will cause speeds to drop to near zero. Single-connection performance is excellent (1+ Gbps).
- **TCP mode** generally gives more stable results than UDP due to TCP flow control.
- **UDP mode** is better for measuring raw link capacity without TCP overhead.
- **First interval** may show higher or lower numbers as the connection stabilizes. Look at intervals 3+ for steady-state throughput.
- **WiFi testing**: bidirectional tests on WiFi will show lower per-direction speeds because WiFi is half-duplex at the MAC layer.
- **Bandwidth limiting** applies to the direction you specify. In bidirectional mode with `-b 100M`, both directions are limited to 100 Mbps each.
## Troubleshooting
| Problem | Solution |
|---------|----------|
| `EC-SRP5 authentication not supported` | Disable auth on MikroTik btest server, or use older RouterOS |
| `Connection refused` | Check port 2000 is open, firewall allows it |
| Server shows 0 RX | Check MikroTik is actually sending (direction setting) |
| Speed drops over time (server mode) | Set Connection Count to 1 on MikroTik. Multi-connection is not supported |
| Very low speed with multiple connections | Multi-connection mode is broken — set Connection Count to 1 |
| UDP `lost` packets high | Network congestion or MTU issues, try reducing bandwidth with `-b` |
| Connection refused | Check that port 2000 is open and the server is running |
| Auth failure with EC-SRP5 | Ensure `--ecsrp5` is set on the server if the MikroTik client uses RouterOS >= 6.43 |
| Auth failure with MD5 | Verify username and password match exactly (case-sensitive) |
| Server shows 0 RX | Check that the MikroTik direction setting includes sending to the server |
| Very low UDP speed | Network congestion or MTU issues; try reducing bandwidth with `-b` |
| IPv6 UDP fails on macOS | Known macOS kernel limitation; use Linux for IPv6 UDP tests |
| Syslog messages not arriving | Verify the syslog server address and port, and check firewall rules for UDP 514 |
| CSV file not created | Check write permissions on the directory; the file is created on first use |


@@ -0,0 +1,145 @@
#!/usr/bin/env python3
"""
Full MITM proxy for btest - forwards TCP control + UDP data.
Captures and logs ALL traffic between MikroTik client and MikroTik server.
Usage:
python3 btest_mitm_full.py --target 172.16.81.1
Then on MikroTik:
/tool/bandwidth-test address=<this_mac_ip> direction=receive protocol=tcp \
user=antar password=antar connection-count=1
"""
import socket
import select
import sys
import argparse
import time
import threading
import struct
def ts():
    return time.strftime("%H:%M:%S", time.localtime()) + f".{int(time.time()*1000)%1000:03d}"
def hexline(data, offset=0, max_bytes=16):
    chunk = data[offset:offset+max_bytes]
    hex_part = " ".join(f"{b:02x}" for b in chunk)
    ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    return f"  {offset:04x}  {hex_part:<48s}  {ascii_part}"
def log_data(direction, data, conn_id=""):
label = f"[{ts()}] {direction}"
if conn_id:
label += f" [{conn_id}]"
label += f" ({len(data)} bytes)"
print(label)
# Show first 4 lines of hex
for i in range(0, min(len(data), 64), 16):
print(hexline(data, i))
if len(data) > 64:
print(f" ... ({len(data)} total)")
# Try to annotate
if len(data) == 4:
val = data.hex()
annotations = {
"01000000": "HELLO / AUTH_OK",
"02000000": "AUTH_REQUIRED (MD5)",
"03000000": "AUTH_REQUIRED (EC-SRP5)",
"00000000": "AUTH_FAILED",
}
if val in annotations:
print(f" >>> {annotations[val]}")
if len(data) == 12 and data[0] == 0x07:
# Status message
seq = int.from_bytes(data[1:5], "big")
recv_bytes = int.from_bytes(data[8:12], "little")
mbps = recv_bytes * 8 / 1_000_000
print(f" >>> STATUS: seq={seq} bytes_received={recv_bytes} ({mbps:.2f} Mbps)")
if len(data) == 16:
proto = "UDP" if data[0] == 0 else "TCP"
dirs = {1: "RX", 2: "TX", 3: "BOTH"}
d = dirs.get(data[1], f"0x{data[1]:02x}")
conn = data[3]
print(f" >>> COMMAND: proto={proto} dir={d} conn_count={conn}")
sys.stdout.flush()
def proxy_tcp(client_sock, target_host, target_port, conn_id):
    """Proxy a single TCP connection."""
    try:
        server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server_sock.settimeout(30)
        server_sock.connect((target_host, target_port))
        server_sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    except Exception as e:
        print(f"[{conn_id}] Failed to connect to target: {e}")
        client_sock.close()
        return
    try:
        while True:
            readable, _, _ = select.select([client_sock, server_sock], [], [], 30)
            if not readable:
                break
            for sock in readable:
                if sock is server_sock:
                    data = server_sock.recv(65536)
                    if not data:
                        return
                    log_data("SERVER→CLIENT", data, conn_id)
                    client_sock.sendall(data)
                elif sock is client_sock:
                    data = client_sock.recv(65536)
                    if not data:
                        return
                    log_data("CLIENT→SERVER", data, conn_id)
                    server_sock.sendall(data)
    except Exception as e:
        print(f"[{conn_id}] Error: {e}")
    finally:
        client_sock.close()
        server_sock.close()
        print(f"[{conn_id}] Closed")
def main():
    parser = argparse.ArgumentParser(description="btest full MITM proxy")
    parser.add_argument("-t", "--target", required=True, help="Target MikroTik IP")
    parser.add_argument("-l", "--listen", type=int, default=2000, help="Listen port")
    args = parser.parse_args()
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", args.listen))
    listener.listen(50)
    print(f"MITM proxy: 0.0.0.0:{args.listen} -> {args.target}:2000")
    print("Point MikroTik btest client at this machine")
    print()
    conn_num = 0
    while True:
        client_sock, client_addr = listener.accept()
        client_sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        conn_num += 1
        conn_id = f"TCP-{conn_num} {client_addr[0]}:{client_addr[1]}"
        print(f"\n{'='*60}")
        print(f"[{ts()}] New connection: {conn_id}")
        t = threading.Thread(
            target=proxy_tcp,
            args=(client_sock, args.target, 2000, conn_id),
            daemon=True,
        )
        t.start()
if __name__ == "__main__":
main()

scripts/push-docker-all.sh Executable file

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Build and push Docker image to both Gitea and GitHub Container Registry.
#
# Prerequisites:
# docker login git.manko.yoga (Gitea — your username + token)
# docker login ghcr.io (GitHub — your username + PAT with packages:write)
#
# Usage:
# ./scripts/push-docker-all.sh v0.6.0
set -euo pipefail
cd "$(dirname "$0")/.."
if [[ -f .env ]]; then
set -a; source .env; set +a
fi
TAG="${1:?Usage: $0 <tag> (e.g. v0.6.0)}"
GITEA_IMAGE="git.manko.yoga/manawenuz/btest-rs"
GHCR_IMAGE="ghcr.io/manawenuz/btest-rs"
echo "=== Building Docker image ==="
docker build \
-t "${GITEA_IMAGE}:${TAG}" \
-t "${GITEA_IMAGE}:latest" \
-t "${GHCR_IMAGE}:${TAG}" \
-t "${GHCR_IMAGE}:latest" \
.
echo ""
echo "=== Pushing to Gitea ==="
docker push "${GITEA_IMAGE}:${TAG}"
docker push "${GITEA_IMAGE}:latest"
echo ""
echo "=== Pushing to GitHub Container Registry ==="
docker push "${GHCR_IMAGE}:${TAG}"
docker push "${GHCR_IMAGE}:latest"
echo ""
echo "Done! Images pushed:"
echo " ${GITEA_IMAGE}:${TAG}"
echo " ${GITEA_IMAGE}:latest"
echo " ${GHCR_IMAGE}:${TAG}"
echo " ${GHCR_IMAGE}:latest"
echo ""
echo "Pull with:"
echo " docker pull ${GHCR_IMAGE}:${TAG}"
echo " docker run --rm -p 2000:2000 -p 2001-2100:2001-2100/udp ${GHCR_IMAGE}:${TAG} -s -v"

scripts/sync-github-release.sh Executable file

@@ -0,0 +1,120 @@
#!/usr/bin/env bash
# Sync a release from Gitea to GitHub.
# Downloads all binaries from Gitea release, creates GitHub release, uploads them.
#
# Prerequisites:
# gh auth login (GitHub CLI authenticated)
#
# Usage:
# ./scripts/sync-github-release.sh v0.6.0
set -euo pipefail
cd "$(dirname "$0")/.."
if [[ -f .env ]]; then
set -a; source .env; set +a
fi
TAG="${1:?Usage: $0 <tag> (e.g. v0.6.0)}"
GITEA_URL="https://git.manko.yoga"
GITEA_REPO="manawenuz/btest-rs"
GITHUB_REPO="manawenuz/btest-rs"
echo "=== Downloading assets from Gitea release ${TAG} ==="
mkdir -p /tmp/btest-release-${TAG}
cd /tmp/btest-release-${TAG}
rm -f *.tar.gz *.zip *.txt
# Get asset list from Gitea API
ASSETS=$(curl -sf "${GITEA_URL}/api/v1/repos/${GITEA_REPO}/releases/tags/${TAG}" | \
python3 -c "import sys,json; [print(a['browser_download_url']) for a in json.load(sys.stdin).get('assets',[])]")
if [ -z "$ASSETS" ]; then
echo "No assets found for ${TAG} on Gitea. Check if the release exists."
exit 1
fi
for url in $ASSETS; do
FILENAME=$(basename "$url")
echo " Downloading: $FILENAME"
curl -sLO "$url"
done
# Merge all separate .sha256 files into checksums-sha256.txt
# and remove the individual .sha256 files
echo ""
echo "=== Merging checksums ==="
for sha_file in *.sha256; do
[ -f "$sha_file" ] || continue
echo " Merging: $sha_file"
cat "$sha_file" >> checksums-sha256.txt
rm "$sha_file"
done
# Add checksums for any files not yet in checksums-sha256.txt
for f in *.tar.gz *.zip; do
[ -f "$f" ] || continue
if ! grep -q "$f" checksums-sha256.txt 2>/dev/null; then
echo " Adding checksum for: $f"
shasum -a 256 "$f" >> checksums-sha256.txt
fi
done
# Sort and deduplicate
sort -u -k2 checksums-sha256.txt > checksums-sha256.tmp && mv checksums-sha256.tmp checksums-sha256.txt
echo ""
echo "Checksums:"
cat checksums-sha256.txt
echo ""
echo "Files to upload:"
ls -lh *.tar.gz *.zip checksums-sha256.txt 2>/dev/null
echo ""
echo "=== Creating GitHub release ${TAG} ==="
gh release create "${TAG}" \
--repo "${GITHUB_REPO}" \
--title "btest-rs ${TAG}" \
--notes "## Downloads
| Platform | Architecture | File |
|----------|-------------|------|
| Linux | x86_64 | btest-linux-x86_64.tar.gz |
| Linux | aarch64 (RPi 64-bit) | btest-linux-aarch64.tar.gz |
| Linux | armv7 (RPi 32-bit) | btest-linux-armv7.tar.gz |
| Windows | x86_64 | btest-windows-x86_64.zip |
| macOS | aarch64 (Apple Silicon) | btest-darwin-aarch64.tar.gz |
| Docker | x86_64 | \`docker pull ghcr.io/manawenuz/btest-rs:${TAG}\` |
### Quick Install (Linux)
\`\`\`bash
curl -LO https://github.com/${GITHUB_REPO}/releases/download/${TAG}/btest-linux-x86_64.tar.gz
tar xzf btest-linux-x86_64.tar.gz
sudo mv btest /usr/local/bin/
\`\`\`
### Raspberry Pi
\`\`\`bash
# 64-bit
curl -LO https://github.com/${GITHUB_REPO}/releases/download/${TAG}/btest-linux-aarch64.tar.gz
tar xzf btest-linux-aarch64.tar.gz
sudo mv btest /usr/local/bin/
# 32-bit
curl -LO https://github.com/${GITHUB_REPO}/releases/download/${TAG}/btest-linux-armv7.tar.gz
tar xzf btest-linux-armv7.tar.gz
sudo mv btest /usr/local/bin/
\`\`\`
" \
./*.tar.gz ./*.zip ./*.txt 2>/dev/null || true
echo ""
echo "=== Done! ==="
echo "https://github.com/${GITHUB_REPO}/releases/tag/${TAG}"
# Cleanup
cd -
rm -rf /tmp/btest-release-${TAG}

scripts/test-aur-remote.sh Executable file

@@ -0,0 +1,64 @@
#!/usr/bin/env bash
# Test the AUR package on a remote x86_64 Linux server using Docker.
#
# Usage:
# ./scripts/test-aur-remote.sh [user@host]
#
# Spins up an Arch container, installs btest-rs via yay (like a real user),
# runs loopback tests, cleans up.
set -euo pipefail
REMOTE="${1:-}"
TEST_SCRIPT='
docker run --rm archlinux:latest bash -c "
set -euo pipefail
echo \"[1/4] Installing yay...\"
pacman -Syu --noconfirm base-devel git sudo >/dev/null 2>&1
useradd -m builder
echo \"builder ALL=(ALL) NOPASSWD: ALL\" >> /etc/sudoers
su builder -c \"
cd /tmp
git clone https://aur.archlinux.org/yay-bin.git 2>/dev/null
cd yay-bin
makepkg -si --noconfirm 2>&1 | tail -3
\"
echo \"[2/4] Installing btest-rs from AUR via yay...\"
su builder -c \"yay -S btest-rs --noconfirm 2>&1 | tail -10\"
echo \"\"
echo \"[3/4] Verify installation...\"
btest --version
which btest
man -w btest 2>/dev/null && echo \"Man page: installed\" || echo \"Man page: not found\"
systemctl cat btest.service 2>/dev/null | head -3 && echo \"Systemd unit: installed\" || echo \"Systemd unit: not found\"
echo \"\"
echo \"[4/4] Loopback tests...\"
echo \"--- TCP (3s) ---\"
btest -s -P 19876 &
sleep 2
btest -c 127.0.0.1 -P 19876 -r -d 3
kill %1 2>/dev/null; wait 2>/dev/null || true
echo \"--- UDP (3s) ---\"
btest -s -P 19877 &
sleep 2
btest -c 127.0.0.1 -P 19877 -r -u -d 3
kill %1 2>/dev/null; wait 2>/dev/null || true
echo \"\"
echo \"=== ALL TESTS PASSED ===\"
"
'
if [ -n "$REMOTE" ]; then
echo "=== Testing AUR package on $REMOTE ==="
ssh "$REMOTE" "$TEST_SCRIPT"
else
echo "=== Testing AUR package locally ==="
eval "$TEST_SCRIPT"
fi


@@ -1,4 +1,4 @@
-use std::sync::atomic::{AtomicBool, AtomicU32, AtomicU64};
+use std::sync::atomic::{AtomicBool, AtomicU8, AtomicU32, AtomicU64};
use std::sync::Arc;
use std::time::Duration;
@@ -13,6 +13,16 @@ pub struct BandwidthState {
pub rx_packets: AtomicU64,
pub rx_lost_packets: AtomicU64,
pub last_udp_seq: AtomicU32,
/// Cumulative totals (never reset by swap)
pub total_tx_bytes: AtomicU64,
pub total_rx_bytes: AtomicU64,
pub total_lost_packets: AtomicU64,
pub intervals: AtomicU32,
/// Remote peer's CPU usage (received via status messages)
pub remote_cpu: AtomicU8,
/// Remaining byte budget (TX + RX combined). When this reaches 0 the test
/// stops immediately. u64::MAX means unlimited (default for non-pro server).
pub byte_budget: AtomicU64,
}
impl BandwidthState {
@@ -26,8 +36,58 @@ impl BandwidthState {
rx_packets: AtomicU64::new(0),
rx_lost_packets: AtomicU64::new(0),
last_udp_seq: AtomicU32::new(0),
total_tx_bytes: AtomicU64::new(0),
total_rx_bytes: AtomicU64::new(0),
total_lost_packets: AtomicU64::new(0),
intervals: AtomicU32::new(0),
remote_cpu: AtomicU8::new(0),
byte_budget: AtomicU64::new(u64::MAX),
})
}
/// Record an interval's stats into cumulative totals.
pub fn record_interval(&self, tx: u64, rx: u64, lost: u64) {
use std::sync::atomic::Ordering::Relaxed;
self.total_tx_bytes.fetch_add(tx, Relaxed);
self.total_rx_bytes.fetch_add(rx, Relaxed);
self.total_lost_packets.fetch_add(lost, Relaxed);
self.intervals.fetch_add(1, Relaxed);
}
/// Try to spend `amount` bytes from the budget. Returns `true` if allowed,
/// `false` if the budget is exhausted (and sets `running = false`).
#[inline]
pub fn spend_budget(&self, amount: u64) -> bool {
use std::sync::atomic::Ordering::{Relaxed, SeqCst};
// Fast path: unlimited budget (non-pro server)
let current = self.byte_budget.load(Relaxed);
if current == u64::MAX {
return true;
}
if current < amount {
self.running.store(false, SeqCst);
return false;
}
self.byte_budget.fetch_sub(amount, Relaxed);
true
}
/// Set the byte budget (total bytes allowed for the entire test).
#[cfg(feature = "pro")]
pub fn set_budget(&self, budget: u64) {
self.byte_budget.store(budget, std::sync::atomic::Ordering::SeqCst);
}
/// Get summary for syslog reporting.
pub fn summary(&self) -> (u64, u64, u64, u32) {
use std::sync::atomic::Ordering::Relaxed;
(
self.total_tx_bytes.load(Relaxed),
self.total_rx_bytes.load(Relaxed),
self.total_lost_packets.load(Relaxed),
self.intervals.load(Relaxed),
)
}
}
/// Calculate the sleep interval between packets to achieve target bandwidth.
@@ -48,6 +108,34 @@ pub fn calc_send_interval(tx_speed_bps: u32, tx_size: u16) -> Option<Duration> {
}
}
/// Advance `next_send` by one interval and clamp drift.
///
/// When the sender falls behind (e.g., the write blocked longer than the
/// inter-packet interval), `next_send` accumulates a debt. Once the path
/// clears, the loop would fire packets with *no* delay until the debt is
/// repaid, producing a burst that overshoots the target rate.
///
/// This helper resets `next_send` to `now` whenever it has drifted more
/// than 2x the interval behind the current wall-clock time, bounding the
/// maximum burst to at most one extra interval's worth of packets.
pub fn advance_next_send(
next_send: &mut std::time::Instant,
iv: Duration,
now: std::time::Instant,
) -> Option<Duration> {
*next_send += iv;
// If we have fallen more than 2x the interval behind, reset to now
// to prevent a compensating burst.
if *next_send + iv < now {
*next_send = now;
}
if *next_send > now {
Some(*next_send - now)
} else {
None
}
}
/// Format a bandwidth value in human-readable form.
pub fn format_bandwidth(bits_per_sec: f64) -> String {
if bits_per_sec >= 1_000_000_000.0 {
@@ -94,6 +182,22 @@ pub fn print_status(
elapsed: Duration,
lost_packets: Option<u64>,
) {
print_status_with_cpu(interval_num, direction, bytes, elapsed, lost_packets, None, None);
}
pub fn print_status_with_cpu(
interval_num: u32,
direction: &str,
bytes: u64,
elapsed: Duration,
lost_packets: Option<u64>,
local_cpu: Option<u8>,
remote_cpu: Option<u8>,
) {
if crate::csv_output::is_quiet() {
return;
}
let secs = elapsed.as_secs_f64();
let bits = bytes as f64 * 8.0;
let bw = if secs > 0.0 { bits / secs } else { 0.0 };
@@ -103,13 +207,26 @@ pub fn print_status(
_ => String::new(),
};
let cpu_str = match (local_cpu, remote_cpu) {
(Some(l), Some(r)) => {
let warn = if l > 70 || r > 70 { " !" } else { "" };
format!(" cpu: {}%/{}%{}", l, r, warn)
}
(Some(l), None) => {
let warn = if l > 70 { " !" } else { "" };
format!(" cpu: {}%{}", l, warn)
}
_ => String::new(),
};
println!(
"[{:4}] {:>3} {} ({} bytes){}",
"[{:4}] {:>3} {} ({} bytes){}{}",
interval_num,
direction,
format_bandwidth(bw),
bytes,
loss_str,
cpu_str,
);
}

src/bin/client_only.rs Normal file

@@ -0,0 +1,127 @@
//! btest-client: minimal bandwidth test client for embedded/OpenWrt systems.
//!
//! Stripped-down client that connects to MikroTik btest servers.
//! No server mode, no syslog, smaller binary footprint.
//!
//! Build: cargo build --profile release-small --bin btest-client
use clap::Parser;
use std::sync::atomic::Ordering;
#[derive(Parser)]
#[command(name = "btest-client", about = "MikroTik Bandwidth Test client", version)]
struct Cli {
/// Server address to connect to
#[arg(short = 'c', long = "client", required = true)]
host: String,
/// Transmit data (upload)
#[arg(short = 't', long = "transmit")]
transmit: bool,
/// Receive data (download)
#[arg(short = 'r', long = "receive")]
receive: bool,
/// Use UDP
#[arg(short = 'u', long = "udp")]
udp: bool,
/// Bandwidth limit (e.g., 100M)
#[arg(short = 'b', long = "bandwidth")]
bandwidth: Option<String>,
/// Port
#[arg(short = 'P', long = "port", default_value_t = 2000)]
port: u16,
/// Username
#[arg(short = 'a', long = "authuser")]
auth_user: Option<String>,
/// Password
#[arg(short = 'p', long = "authpass")]
auth_pass: Option<String>,
/// NAT mode
#[arg(short = 'n', long = "nat")]
nat: bool,
/// Duration in seconds (0=unlimited)
#[arg(short = 'd', long = "duration", default_value_t = 0)]
duration: u64,
/// Verbose
#[arg(short = 'v', long = "verbose", action = clap::ArgAction::Count)]
verbose: u8,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let cli = Cli::parse();
let filter = match cli.verbose {
0 => "info",
1 => "debug",
_ => "trace",
};
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::try_from_default_env()
.unwrap_or_else(|_| tracing_subscriber::EnvFilter::new(filter)),
)
.with_target(false)
.init();
btest_rs::cpu::start_sampler();
if !cli.transmit && !cli.receive {
eprintln!("Error: specify -t (transmit) and/or -r (receive)");
std::process::exit(1);
}
let direction = match (cli.transmit, cli.receive) {
(true, false) => btest_rs::protocol::CMD_DIR_RX,
(false, true) => btest_rs::protocol::CMD_DIR_TX,
(true, true) => btest_rs::protocol::CMD_DIR_BOTH,
_ => unreachable!(),
};
let bw = match &cli.bandwidth {
Some(b) => btest_rs::bandwidth::parse_bandwidth(b)?,
None => 0,
};
let (tx_speed, rx_speed) = match direction {
btest_rs::protocol::CMD_DIR_TX => (bw, 0),
btest_rs::protocol::CMD_DIR_RX => (0, bw),
_ => (bw, bw),
};
let state = btest_rs::bandwidth::BandwidthState::new();
let state_clone = state.clone();
let host = cli.host.clone();
let client_fut = btest_rs::client::run_client(
&host, cli.port, direction, cli.udp,
tx_speed, rx_speed,
cli.auth_user, cli.auth_pass, cli.nat,
state_clone,
);
if cli.duration > 0 {
match tokio::time::timeout(
std::time::Duration::from_secs(cli.duration),
client_fut,
).await {
Ok(r) => { let _ = r?; }
Err(_) => {
state.running.store(false, Ordering::SeqCst);
}
}
} else {
let _ = client_fut.await?;
}
Ok(())
}

src/bin/server_only.rs Normal file

@@ -0,0 +1,62 @@
//! btest-server: minimal bandwidth test server for embedded/OpenWrt systems.
//!
//! Stripped-down server that accepts MikroTik client connections.
//! No client mode, no syslog, no CSV, smaller binary footprint.
//!
//! Build: cargo build --profile release-small --bin btest-server
use clap::Parser;
#[derive(Parser)]
#[command(name = "btest-server", about = "MikroTik Bandwidth Test server", version)]
struct Cli {
/// Port
#[arg(short = 'P', long = "port", default_value_t = 2000)]
port: u16,
/// IPv4 listen address
#[arg(long = "listen", default_value = "0.0.0.0")]
listen_addr: String,
/// Username
#[arg(short = 'a', long = "authuser")]
auth_user: Option<String>,
/// Password
#[arg(short = 'p', long = "authpass")]
auth_pass: Option<String>,
/// Use EC-SRP5 authentication
#[arg(long = "ecsrp5")]
ecsrp5: bool,
/// Verbose
#[arg(short = 'v', long = "verbose", action = clap::ArgAction::Count)]
verbose: u8,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let cli = Cli::parse();
let filter = match cli.verbose {
0 => "info",
1 => "debug",
_ => "trace",
};
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::try_from_default_env()
.unwrap_or_else(|_| tracing_subscriber::EnvFilter::new(filter)),
)
.with_target(false)
.init();
btest_rs::cpu::start_sampler();
let v4 = if cli.listen_addr.eq_ignore_ascii_case("none") { None } else { Some(cli.listen_addr) };
tracing::info!("btest-server starting on port {}", cli.port);
btest_rs::server::run_server(cli.port, cli.auth_user, cli.auth_pass, cli.ecsrp5, v4, None).await?;
Ok(())
}


@@ -20,6 +20,7 @@ pub async fn run_client(
auth_user: Option<String>,
auth_pass: Option<String>,
nat_mode: bool,
shared_state: Arc<BandwidthState>,
) -> Result<()> {
let addr = format!("{}:{}", host, port);
tracing::info!("Connecting to {}...", addr);
@@ -37,29 +38,45 @@ pub async fn run_client(
send_command(&mut stream, &cmd).await?;
let resp = recv_response(&mut stream).await?;
match (auth_user.as_deref(), auth_pass.as_deref()) {
(Some(user), Some(pass)) => {
auth::client_authenticate(&mut stream, resp, user, pass).await?;
}
_ => {
-if resp == AUTH_REQUIRED {
if resp == AUTH_OK {
// No auth required
} else if resp == AUTH_REQUIRED {
// MD5 auth
match (auth_user.as_deref(), auth_pass.as_deref()) {
(Some(user), Some(pass)) => {
auth::client_authenticate(&mut stream, resp, user, pass).await?;
}
_ => {
return Err(BtestError::Protocol(
"Server requires authentication but no credentials provided".into(),
"Server requires authentication but no credentials provided (-a/-p)".into(),
));
}
-if resp == [0x03, 0x00, 0x00, 0x00] {
+}
+} else if resp == [0x03, 0x00, 0x00, 0x00] {
// EC-SRP5 auth (RouterOS >= 6.43)
match (auth_user.as_deref(), auth_pass.as_deref()) {
(Some(user), Some(pass)) => {
crate::ecsrp5::client_authenticate(&mut stream, user, pass).await?;
// After EC-SRP5, server sends AUTH_OK
let post_auth = recv_response(&mut stream).await?;
if post_auth != AUTH_OK {
return Err(BtestError::Protocol(format!(
"Unexpected post-EC-SRP5 response: {:02x?}",
post_auth
)));
}
}
_ => {
return Err(BtestError::Protocol(
"Server requires EC-SRP5 authentication (RouterOS >= 6.43) which is not yet supported. \
Try disabling authentication on the MikroTik btest server, or provide -a/-p credentials".into(),
"Server requires EC-SRP5 authentication. Provide credentials with -a/-p".into(),
));
}
-if resp != AUTH_OK {
-return Err(BtestError::Protocol(format!(
-"Unexpected server response: {:02x?}",
-resp
-)));
-}
}
} else {
return Err(BtestError::Protocol(format!(
"Unexpected server response: {:02x?}",
resp
)));
}
tracing::info!(
@@ -74,16 +91,15 @@ pub async fn run_client(
);
if use_udp {
-run_udp_test_client(&mut stream, host, &cmd, nat_mode).await
+run_udp_test_client(&mut stream, host, &cmd, nat_mode, shared_state).await
} else {
-run_tcp_test_client(stream, cmd).await
+run_tcp_test_client(stream, cmd, shared_state).await
}
}
// --- TCP Test Client ---
-async fn run_tcp_test_client(stream: TcpStream, cmd: Command) -> Result<()> {
-let state = BandwidthState::new();
+async fn run_tcp_test_client(stream: TcpStream, cmd: Command, state: Arc<BandwidthState>) -> Result<()> {
let tx_size = cmd.tx_size as usize;
let client_should_tx = cmd.client_tx();
let client_should_rx = cmd.client_rx();
@@ -111,6 +127,12 @@ async fn run_tcp_test_client(stream: TcpStream, cmd: Command) -> Result<()> {
Some(tokio::spawn(async move {
tcp_client_rx_loop(reader, state_rx).await
}))
} else if client_should_tx {
// TX-only: still need to read the server's status messages to get remote CPU.
// Don't count these bytes as RX data.
Some(tokio::spawn(async move {
tcp_client_status_reader(reader, state_rx).await
}))
} else {
_reader_keepalive = Some(reader);
None
@@ -132,8 +154,7 @@ async fn tcp_client_tx_loop(
) {
tokio::time::sleep(Duration::from_millis(100)).await;
-let mut packet = vec![0u8; tx_size];
-packet[0] = STATUS_MSG_TYPE;
+let packet = vec![0u8; tx_size]; // TCP data is all zeros
let mut interval = bandwidth::calc_send_interval(tx_speed, tx_size as u16);
let mut next_send = Instant::now();
@@ -152,10 +173,9 @@ async fn tcp_client_tx_loop(
match interval {
Some(iv) => {
-next_send += iv;
let now = Instant::now();
-if next_send > now {
-tokio::time::sleep(next_send - now).await;
+if let Some(delay) = bandwidth::advance_next_send(&mut next_send, iv, now) {
+tokio::time::sleep(delay).await;
}
}
None => {
@@ -175,11 +195,53 @@ async fn tcp_client_rx_loop(
Ok(0) | Err(_) => break,
Ok(n) => {
state.rx_bytes.fetch_add(n as u64, Ordering::Relaxed);
// Scan for interleaved 12-byte status messages from the server.
// In BOTH mode, the server's TX loop injects status messages into the
// data stream. Status starts with 0x07 (STATUS_MSG_TYPE) and byte 1
// has the high bit set (0x80 | cpu%). Data packets are all zeros.
if n >= STATUS_MSG_SIZE {
for i in 0..=(n - STATUS_MSG_SIZE) {
if buf[i] == STATUS_MSG_TYPE && buf[i + 1] >= 0x80 {
let cpu = buf[i + 1] & 0x7F;
state.remote_cpu.store(cpu.min(100), Ordering::Relaxed);
break;
}
}
}
}
}
}
}
/// Read only status messages from the server (TX-only mode).
/// The server sends 12-byte status messages on the TCP connection even when
/// the client is only transmitting. We need to read them to get remote CPU
/// and to prevent the TCP receive buffer from filling up.
async fn tcp_client_status_reader(
mut reader: tokio::net::tcp::OwnedReadHalf,
state: Arc<BandwidthState>,
) {
let mut buf = [0u8; STATUS_MSG_SIZE];
while state.running.load(Ordering::Relaxed) {
match reader.read_exact(&mut buf).await {
Ok(_) => {
if buf[0] == STATUS_MSG_TYPE && buf[1] >= 0x80 {
let status = StatusMessage::deserialize(&buf);
state.remote_cpu.store(status.cpu_load, Ordering::Relaxed);
// Use server's bytes_received for TX speed adaptation
if status.bytes_received > 0 {
let new_speed =
((status.bytes_received as u64 * 8 * 3) / 2) as u32;
state.tx_speed.store(new_speed, Ordering::Relaxed);
state.tx_speed_changed.store(true, Ordering::Relaxed);
}
}
}
Err(_) => break,
}
}
}
// --- UDP Test Client ---
async fn run_udp_test_client(
@@ -187,6 +249,7 @@ async fn run_udp_test_client(
host: &str,
cmd: &Command,
nat_mode: bool,
state: Arc<BandwidthState>,
) -> Result<()> {
let mut port_buf = [0u8; 2];
stream.read_exact(&mut port_buf).await?;
@@ -198,9 +261,19 @@ async fn run_udp_test_client(
server_udp_port, client_udp_port,
);
-let udp = UdpSocket::bind(format!("0.0.0.0:{}", client_udp_port)).await?;
-let server_udp_addr: SocketAddr =
-format!("{}:{}", host, server_udp_port).parse().unwrap();
// Detect IPv6 from the host address
let is_ipv6 = host.contains(':');
let bind_addr: SocketAddr = if is_ipv6 {
format!("[::]:{}", client_udp_port).parse().unwrap()
} else {
format!("0.0.0.0:{}", client_udp_port).parse().unwrap()
};
let udp = UdpSocket::bind(bind_addr).await?;
let server_udp_addr = if is_ipv6 {
SocketAddr::new(host.parse().unwrap(), server_udp_port)
} else {
format!("{}:{}", host, server_udp_port).parse().unwrap()
};
udp.connect(server_udp_addr).await?;
if nat_mode {
@@ -208,7 +281,6 @@ async fn run_udp_test_client(
udp.send(&[]).await?;
}
-let state = BandwidthState::new();
let tx_size = cmd.tx_size as usize;
let client_should_tx = cmd.client_tx();
let client_should_rx = cmd.client_rx();
@@ -264,13 +336,19 @@ async fn udp_client_tx_loop(
state.tx_bytes.fetch_add(n as u64, Ordering::Relaxed);
consecutive_errors = 0;
}
-Err(_) => {
+Err(e) => {
consecutive_errors += 1;
-if consecutive_errors > 1000 {
if consecutive_errors == 1 {
tracing::debug!("UDP TX send error: {} (target)", e);
}
if consecutive_errors > 50000 {
tracing::warn!("UDP TX: too many consecutive send errors, stopping");
break;
}
-tokio::time::sleep(Duration::from_micros(200)).await;
let backoff = Duration::from_micros(
(200 + consecutive_errors.min(5000) as u64 * 10).min(10000)
);
tokio::time::sleep(backoff).await;
continue;
}
}
@@ -286,10 +364,9 @@ async fn udp_client_tx_loop(
match interval {
Some(iv) => {
-next_send += iv;
let now = Instant::now();
-if next_send > now {
-tokio::time::sleep(next_send - now).await;
+if let Some(delay) = bandwidth::advance_next_send(&mut next_send, iv, now) {
+tokio::time::sleep(delay).await;
}
}
None => {
@@ -347,14 +424,17 @@ async fn client_status_loop(cmd: &Command, state: &BandwidthState) {
seq += 1;
-if cmd.client_tx() {
-let tx = state.tx_bytes.swap(0, Ordering::Relaxed);
-bandwidth::print_status(seq, "TX", tx, Duration::from_secs(1), None);
-}
let tx = if cmd.client_tx() { state.tx_bytes.swap(0, Ordering::Relaxed) } else { 0 };
let rx = if cmd.client_rx() { state.rx_bytes.swap(0, Ordering::Relaxed) } else { 0 };
state.record_interval(tx, rx, 0);
let local_cpu = crate::cpu::get();
let remote_cpu = state.remote_cpu.load(Ordering::Relaxed);
if cmd.client_tx() {
bandwidth::print_status_with_cpu(seq, "TX", tx, Duration::from_secs(1), None, Some(local_cpu), Some(remote_cpu));
}
if cmd.client_rx() {
-let rx = state.rx_bytes.swap(0, Ordering::Relaxed);
-bandwidth::print_status(seq, "RX", rx, Duration::from_secs(1), None);
bandwidth::print_status_with_cpu(seq, "RX", rx, Duration::from_secs(1), None, Some(local_cpu), Some(remote_cpu));
}
}
}
@@ -388,6 +468,7 @@ async fn udp_client_status_loop(
match tokio::time::timeout(wait_time, reader.read_exact(&mut status_buf)).await {
Ok(Ok(_)) => {
let server_status = StatusMessage::deserialize(&status_buf);
state.remote_cpu.store(server_status.cpu_load, Ordering::Relaxed);
if server_status.bytes_received > 0 && cmd.client_tx() {
let new_speed =
@@ -419,8 +500,9 @@ async fn udp_client_status_loop(
let rx_bytes = state.rx_bytes.swap(0, Ordering::Relaxed);
let tx_bytes = state.tx_bytes.swap(0, Ordering::Relaxed);
let lost = state.rx_lost_packets.swap(0, Ordering::Relaxed);
state.record_interval(tx_bytes, rx_bytes, lost);
-let status = StatusMessage {
+let status = StatusMessage { cpu_load: crate::cpu::get(),
seq,
bytes_received: rx_bytes as u32,
};
@@ -430,11 +512,13 @@ async fn udp_client_status_loop(
}
let _ = writer.flush().await;
let local_cpu = crate::cpu::get();
let remote_cpu = state.remote_cpu.load(Ordering::Relaxed);
if cmd.client_tx() {
bandwidth::print_status(seq, "TX", tx_bytes, Duration::from_secs(1), None);
bandwidth::print_status_with_cpu(seq, "TX", tx_bytes, Duration::from_secs(1), None, Some(local_cpu), Some(remote_cpu));
}
if cmd.client_rx() {
-bandwidth::print_status(seq, "RX", rx_bytes, Duration::from_secs(1), Some(lost));
+bandwidth::print_status_with_cpu(seq, "RX", rx_bytes, Duration::from_secs(1), Some(lost), Some(local_cpu), Some(remote_cpu));
}
}
}

src/cpu.rs Normal file

@@ -0,0 +1,215 @@
//! Lightweight CPU usage measurement.
//!
//! Returns the system-wide CPU usage as a percentage (0-100).
//! Works on macOS, Linux, Windows, and FreeBSD without external dependencies.
use std::sync::atomic::{AtomicU8, Ordering};
use std::time::Duration;
static CURRENT_CPU: AtomicU8 = AtomicU8::new(0);
/// Start a background thread that samples CPU usage every second.
pub fn start_sampler() {
std::thread::spawn(|| {
let mut prev = get_cpu_times();
loop {
std::thread::sleep(Duration::from_secs(1));
let curr = get_cpu_times();
let usage = compute_usage(&prev, &curr);
CURRENT_CPU.store(usage, Ordering::Relaxed);
prev = curr;
}
});
}
/// Get the current CPU usage percentage (0-100).
pub fn get() -> u8 {
CURRENT_CPU.load(Ordering::Relaxed)
}
// --- Platform-specific implementation ---
#[cfg(any(target_os = "linux", target_os = "android"))]
fn get_cpu_times() -> (u64, u64) {
// Read /proc/stat: cpu user nice system idle iowait irq softirq steal
if let Ok(content) = std::fs::read_to_string("/proc/stat") {
if let Some(line) = content.lines().next() {
let parts: Vec<u64> = line
.split_whitespace()
.skip(1) // skip "cpu"
.filter_map(|s| s.parse().ok())
.collect();
if parts.len() >= 4 {
let idle = parts[3];
let total: u64 = parts.iter().sum();
return (total, idle);
}
}
}
(0, 0)
}
#[cfg(target_os = "macos")]
fn get_cpu_times() -> (u64, u64) {
// Use host_statistics to get CPU ticks
use std::mem::MaybeUninit;
extern "C" {
fn mach_host_self() -> u32;
fn host_statistics(
host: u32,
flavor: i32,
info: *mut i32,
count: *mut u32,
) -> i32;
}
const HOST_CPU_LOAD_INFO: i32 = 3;
const CPU_STATE_MAX: usize = 4;
unsafe {
let host = mach_host_self();
let mut info = MaybeUninit::<[u32; CPU_STATE_MAX]>::uninit();
let mut count: u32 = CPU_STATE_MAX as u32;
let ret = host_statistics(
host,
HOST_CPU_LOAD_INFO,
info.as_mut_ptr() as *mut i32,
&mut count,
);
if ret == 0 {
let ticks = info.assume_init();
// ticks: [user, system, idle, nice]
let user = ticks[0] as u64;
let system = ticks[1] as u64;
let idle = ticks[2] as u64;
let nice = ticks[3] as u64;
let total = user + system + idle + nice;
return (total, idle);
}
}
(0, 0)
}
#[cfg(target_os = "windows")]
fn get_cpu_times() -> (u64, u64) {
#[repr(C)]
#[derive(Default)]
#[allow(non_snake_case)]
struct FILETIME {
dwLowDateTime: u32,
dwHighDateTime: u32,
}
impl FILETIME {
fn to_u64(&self) -> u64 {
(self.dwHighDateTime as u64) << 32 | self.dwLowDateTime as u64
}
}
extern "system" {
fn GetSystemTimes(
lpIdleTime: *mut FILETIME,
lpKernelTime: *mut FILETIME,
lpUserTime: *mut FILETIME,
) -> i32;
}
let mut idle = FILETIME::default();
let mut kernel = FILETIME::default();
let mut user = FILETIME::default();
// SAFETY: We pass valid pointers to stack-allocated FILETIME structs.
// GetSystemTimes is a well-documented Win32 API that writes into these
// output parameters. A non-zero return value indicates success.
let ret = unsafe { GetSystemTimes(&mut idle, &mut kernel, &mut user) };
if ret != 0 {
let idle_ticks = idle.to_u64();
// Kernel time includes idle time on Windows, so total = kernel + user.
let total_ticks = kernel.to_u64() + user.to_u64();
(total_ticks, idle_ticks)
} else {
(0, 0)
}
}
#[cfg(target_os = "freebsd")]
fn get_cpu_times() -> (u64, u64) {
// kern.cp_time returns: user nice system interrupt idle
if let Ok(output) = std::process::Command::new("sysctl")
.arg("-n")
.arg("kern.cp_time")
.output()
{
if output.status.success() {
let text = String::from_utf8_lossy(&output.stdout);
let parts: Vec<u64> = text
.split_whitespace()
.filter_map(|s| s.parse().ok())
.collect();
if parts.len() >= 5 {
let user = parts[0];
let nice = parts[1];
let system = parts[2];
let interrupt = parts[3];
let idle = parts[4];
let total = user + nice + system + interrupt + idle;
return (total, idle);
}
}
}
(0, 0)
}
#[cfg(not(any(
target_os = "linux",
target_os = "android",
target_os = "macos",
target_os = "windows",
target_os = "freebsd",
)))]
fn get_cpu_times() -> (u64, u64) {
(0, 0) // Unsupported platform
}
fn compute_usage(prev: &(u64, u64), curr: &(u64, u64)) -> u8 {
let total_diff = curr.0.saturating_sub(prev.0);
let idle_diff = curr.1.saturating_sub(prev.1);
if total_diff == 0 {
return 0;
}
let busy = total_diff.saturating_sub(idle_diff);
((busy * 100) / total_diff).min(100) as u8
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_cpu_times_returns_nonzero() {
let (total, idle) = get_cpu_times();
// On supported platforms, total should be > 0
if cfg!(any(
target_os = "linux",
target_os = "android",
target_os = "macos",
target_os = "windows",
target_os = "freebsd",
)) {
assert!(total > 0, "CPU total ticks should be > 0");
assert!(idle <= total, "idle should be <= total");
}
}
#[test]
fn test_compute_usage() {
assert_eq!(compute_usage(&(0, 0), &(100, 20)), 80);
assert_eq!(compute_usage(&(0, 0), &(100, 100)), 0);
assert_eq!(compute_usage(&(0, 0), &(100, 0)), 100);
assert_eq!(compute_usage(&(0, 0), &(0, 0)), 0);
}
}

src/csv_output.rs Normal file

@@ -0,0 +1,86 @@
//! CSV output for machine-readable test results.
//!
//! Appends a row per test to the specified CSV file.
//! Creates the file with headers if it doesn't exist.
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;
use std::sync::Mutex;
use std::time::SystemTime;
static CSV_FILE: Mutex<Option<String>> = Mutex::new(None);
static QUIET: std::sync::atomic::AtomicBool = std::sync::atomic::AtomicBool::new(false);
const HEADER: &str = "timestamp,host,port,protocol,direction,duration_s,tx_avg_mbps,rx_avg_mbps,tx_bytes,rx_bytes,lost_packets,local_cpu_pct,remote_cpu_pct,auth_type";
/// Initialize CSV output. Creates file with headers if needed.
pub fn init(path: &str) -> std::io::Result<()> {
let needs_header = !Path::new(path).exists() || std::fs::metadata(path)?.len() == 0;
if needs_header {
let mut f = OpenOptions::new().create(true).write(true).open(path)?;
writeln!(f, "{}", HEADER)?;
}
*CSV_FILE.lock().unwrap() = Some(path.to_string());
Ok(())
}
pub fn set_quiet(q: bool) {
QUIET.store(q, std::sync::atomic::Ordering::Relaxed);
}
pub fn is_quiet() -> bool {
QUIET.load(std::sync::atomic::Ordering::Relaxed)
}
/// Write a test result row to the CSV file.
pub fn write_result(
host: &str,
port: u16,
protocol: &str,
direction: &str,
duration_secs: u64,
tx_bytes: u64,
rx_bytes: u64,
lost_packets: u64,
local_cpu: u8,
remote_cpu: u8,
auth_type: &str,
) {
let guard = CSV_FILE.lock().unwrap();
if let Some(ref path) = *guard {
let tx_mbps = if duration_secs > 0 {
tx_bytes as f64 * 8.0 / duration_secs as f64 / 1_000_000.0
} else {
0.0
};
let rx_mbps = if duration_secs > 0 {
rx_bytes as f64 * 8.0 / duration_secs as f64 / 1_000_000.0
} else {
0.0
};
let now = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let row = format!(
"{},{},{},{},{},{},{:.2},{:.2},{},{},{},{},{},{}",
now, host, port, protocol, direction, duration_secs,
tx_mbps, rx_mbps, tx_bytes, rx_bytes, lost_packets,
local_cpu, remote_cpu, auth_type,
);
if let Ok(mut f) = OpenOptions::new().append(true).open(path) {
let _ = writeln!(f, "{}", row);
}
}
}
/// Check if CSV output is enabled.
pub fn is_enabled() -> bool {
CSV_FILE.lock().unwrap().is_some()
}

src/ecsrp5.rs Normal file

@@ -0,0 +1,659 @@
//! EC-SRP5 authentication for MikroTik RouterOS >= 6.43.
//!
//! Implements the Curve25519-Weierstrass EC-SRP5 protocol used by MikroTik btest.
//! Based on research by Margin Research (Apache-2.0 License):
//! https://github.com/MarginResearch/mikrotik_authentication
//!
//! btest framing: `[len:1][payload]` (no 0x06 handler byte, unlike Winbox).
use num_bigint::BigUint;
use num_integer::Integer;
use num_traits::{One, Zero};
use sha2::{Digest, Sha256};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use crate::protocol::{BtestError, Result};
// --- Curve25519 parameters in Weierstrass form ---
fn p() -> BigUint {
BigUint::parse_bytes(
b"7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed",
16,
)
.unwrap()
}
fn curve_order() -> BigUint {
BigUint::parse_bytes(
b"1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed",
16,
)
.unwrap()
}
fn weierstrass_a() -> BigUint {
BigUint::parse_bytes(
b"2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa984914a144",
16,
)
.unwrap()
}
const MONT_A: u64 = 486662;
// --- Modular arithmetic ---
fn modinv(a: &BigUint, modulus: &BigUint) -> BigUint {
// Fermat's little theorem: a^(p-2) mod p
let exp = modulus - BigUint::from(2u32);
a.modpow(&exp, modulus)
}
fn legendre_symbol(a: &BigUint, p_val: &BigUint) -> i32 {
let exp = (p_val - BigUint::one()) / BigUint::from(2u32);
let l = a.modpow(&exp, p_val);
if l == p_val - BigUint::one() {
-1
} else if l == BigUint::zero() {
0
} else {
1
}
}
fn prime_mod_sqrt(a: &BigUint, p_val: &BigUint) -> Option<(BigUint, BigUint)> {
let a = a % p_val;
if a.is_zero() {
return Some((BigUint::zero(), BigUint::zero()));
}
if legendre_symbol(&a, p_val) != 1 {
return None;
}
// For p ≡ 5 (mod 8) — which is Curve25519's case — use Atkin's algorithm
// This is more reliable than Tonelli-Shanks for this specific case
let p_mod_8 = p_val % BigUint::from(8u32);
if p_mod_8 == BigUint::from(5u32) {
// v = (2a)^((p-5)/8) mod p
let exp = (p_val - BigUint::from(5u32)) / BigUint::from(8u32);
let two_a = (BigUint::from(2u32) * &a) % p_val;
let v = two_a.modpow(&exp, p_val);
// i = 2 * a * v^2 mod p
let i_val = (BigUint::from(2u32) * &a % p_val * &v % p_val * &v) % p_val;
// x = a * v * (i - 1) mod p
let i_minus_1 = if i_val >= BigUint::one() {
(&i_val - BigUint::one()) % p_val
} else {
(p_val - BigUint::one() + &i_val) % p_val
};
let x = (&a * &v % p_val * &i_minus_1) % p_val;
// Verify: x^2 ≡ a (mod p)
let check = (&x * &x) % p_val;
if check == a {
let other = p_val - &x;
return Some((x, other));
}
return None;
}
if p_mod_8 == BigUint::from(3u32) || p_mod_8 == BigUint::from(7u32) {
let exp = (p_val + BigUint::one()) / BigUint::from(4u32);
let x = a.modpow(&exp, p_val);
let other = p_val - &x;
return Some((x, other));
}
// General Tonelli-Shanks for other primes
let mut q = p_val - BigUint::one();
let mut s = 0u32;
while q.is_even() {
s += 1;
q >>= 1;
}
let mut z = BigUint::from(2u32);
while legendre_symbol(&z, p_val) != -1 {
z += BigUint::one();
}
let mut c = z.modpow(&q, p_val);
let mut x = a.modpow(&((&q + BigUint::one()) / BigUint::from(2u32)), p_val);
let mut t = a.modpow(&q, p_val);
let mut m = s;
while t != BigUint::one() {
let mut i = 1u32;
let mut tmp = (&t * &t) % p_val;
while tmp != BigUint::one() {
tmp = (&tmp * &tmp) % p_val;
i += 1;
}
let b = c.modpow(&BigUint::from(1u32 << (m - i - 1)), p_val);
x = (&x * &b) % p_val;
t = ((&t * &b % p_val) * &b) % p_val;
c = (&b * &b) % p_val;
m = i;
}
let other = p_val - &x;
Some((x, other))
}
// --- Weierstrass curve point ---
#[derive(Clone, Debug)]
struct Point {
x: BigUint,
y: BigUint,
infinity: bool,
}
impl Point {
fn infinity() -> Self {
Self {
x: BigUint::zero(),
y: BigUint::zero(),
infinity: true,
}
}
fn new(x: BigUint, y: BigUint) -> Self {
Self {
x,
y,
infinity: false,
}
}
fn add(&self, other: &Point) -> Point {
let p_val = p();
if self.infinity {
return other.clone();
}
if other.infinity {
return self.clone();
}
if self.x == other.x && self.y != other.y {
return Point::infinity();
}
let lam = if self.x == other.x && self.y == other.y {
// Point doubling; a point with y == 0 has order 2, so 2P = infinity
if self.y.is_zero() {
return Point::infinity();
}
let three_x_sq = (BigUint::from(3u32) * &self.x * &self.x + &weierstrass_a()) % &p_val;
let two_y = (BigUint::from(2u32) * &self.y) % &p_val;
(three_x_sq * modinv(&two_y, &p_val)) % &p_val
} else {
// Point addition
let dy = if other.y >= self.y {
(&other.y - &self.y) % &p_val
} else {
(&p_val - (&self.y - &other.y) % &p_val) % &p_val
};
let dx = if other.x >= self.x {
(&other.x - &self.x) % &p_val
} else {
(&p_val - (&self.x - &other.x) % &p_val) % &p_val
};
(dy * modinv(&dx, &p_val)) % &p_val
};
let x3 = {
let lam_sq = (&lam * &lam) % &p_val;
let sum_x = (&self.x + &other.x) % &p_val;
if lam_sq >= sum_x {
(lam_sq - sum_x) % &p_val
} else {
(&p_val - (sum_x - lam_sq) % &p_val) % &p_val
}
};
let y3 = {
let dx = if self.x >= x3 {
(&self.x - &x3) % &p_val
} else {
(&p_val - (&x3 - &self.x) % &p_val) % &p_val
};
let prod = (&lam * dx) % &p_val;
if prod >= self.y {
(prod - &self.y) % &p_val
} else {
(&p_val - (&self.y - prod) % &p_val) % &p_val
}
};
Point::new(x3, y3)
}
fn scalar_mul(&self, scalar: &BigUint) -> Point {
let mut result = Point::infinity();
let mut base = self.clone();
let mut k = scalar.clone();
while !k.is_zero() {
if &k & &BigUint::one() == BigUint::one() {
result = result.add(&base);
}
base = base.add(&base);
k >>= 1;
}
result
}
}
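Point::scalar_mul above is plain double-and-add. A self-contained sketch over the textbook toy curve y² = x³ + 2x + 2 over F₁₇ (curve and generator are the standard classroom example, not anything from this codebase; the real code works in Curve25519's field):

```rust
// Double-and-add, as in Point::scalar_mul above, over a toy curve.
const P: u64 = 17;
const A: u64 = 2;

fn modpow(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

// Modular inverse via Fermat's little theorem (P is prime).
fn inv(x: u64) -> u64 {
    modpow(x, P - 2, P)
}

type Pt = Option<(u64, u64)>; // None = point at infinity

fn add(p: Pt, q: Pt) -> Pt {
    let (px, py) = match p { Some(v) => v, None => return q };
    let (qx, qy) = match q { Some(v) => v, None => return p };
    if px == qx && (py + qy) % P == 0 {
        return None; // P + (-P) = O, and 2P = O when y = 0
    }
    let lam = if (px, py) == (qx, qy) {
        (3 * px * px + A) % P * inv(2 * py % P) % P // tangent slope
    } else {
        (qy + P - py) % P * inv((qx + P - px) % P) % P // chord slope
    };
    let x3 = (lam * lam + 2 * P - px - qx) % P;
    let y3 = (lam * (px + P - x3) % P + P - py) % P;
    Some((x3, y3))
}

fn scalar_mul(mut k: u64, mut base: Pt) -> Pt {
    let mut acc: Pt = None;
    while k > 0 {
        if k & 1 == 1 {
            acc = add(acc, base); // add base when the low bit is set
        }
        base = add(base, base); // double for the next bit
        k >>= 1;
    }
    acc
}
```

With generator G = (5, 1), which has order 19 on this curve, scalar_mul(2, Some((5, 1))) gives (6, 3), and scalar_mul(19, …) lands on the point at infinity.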
// --- WCurve: Curve25519 in Weierstrass form ---
struct WCurve {
g: Point,
conversion_from_m: BigUint,
conversion_to_m: BigUint,
}
impl WCurve {
fn new() -> Self {
let p_val = p();
let mont_a = BigUint::from(MONT_A);
let three_inv = modinv(&BigUint::from(3u32), &p_val);
let conversion_from_m = (&mont_a * &three_inv) % &p_val;
let conversion_to_m = (&p_val - &conversion_from_m) % &p_val;
let mut curve = WCurve {
g: Point::infinity(),
conversion_from_m,
conversion_to_m,
};
curve.g = curve.lift_x(&BigUint::from(9u32), false);
curve
}
fn to_montgomery(&self, pt: &Point) -> ([u8; 32], u8) {
let p_val = p();
let x = (&pt.x + &self.conversion_to_m) % &p_val;
let parity = if pt.y.bit(0) { 1u8 } else { 0u8 };
let mut bytes = [0u8; 32];
let x_bytes = x.to_bytes_be();
let start = 32 - x_bytes.len().min(32);
bytes[start..].copy_from_slice(&x_bytes[..x_bytes.len().min(32)]);
(bytes, parity)
}
fn lift_x(&self, x_mont: &BigUint, parity: bool) -> Point {
let p_val = p();
let x = x_mont % &p_val;
// y^2 = x^3 + Ax^2 + x (Montgomery)
let y_squared = (&x * &x * &x + BigUint::from(MONT_A) * &x * &x + &x) % &p_val;
// Convert x to Weierstrass
let x_w = (&x + &self.conversion_from_m) % &p_val;
if let Some((y1, y2)) = prime_mod_sqrt(&y_squared, &p_val) {
let pt1 = Point::new(x_w.clone(), y1);
let pt2 = Point::new(x_w, y2);
if parity {
if pt1.y.bit(0) { pt1 } else { pt2 }
} else {
if !pt1.y.bit(0) { pt1 } else { pt2 }
}
} else {
Point::infinity()
}
}
fn gen_public_key(&self, priv_key: &[u8; 32]) -> ([u8; 32], u8) {
let scalar = BigUint::from_bytes_be(priv_key);
let pt = self.g.scalar_mul(&scalar);
self.to_montgomery(&pt)
}
fn redp1(&self, x_bytes: &[u8; 32], parity: bool) -> Point {
let mut x = sha256_bytes(x_bytes);
loop {
let x2 = sha256_bytes(&x);
let x_int = BigUint::from_bytes_be(&x2);
let pt = self.lift_x(&x_int, parity);
if !pt.infinity {
return pt;
}
let mut val = BigUint::from_bytes_be(&x);
val += BigUint::one();
x = bigint_to_32bytes(&val);
}
}
fn gen_password_validator_priv(
&self,
username: &str,
password: &str,
salt: &[u8; 16],
) -> [u8; 32] {
let inner = sha256_bytes(format!("{}:{}", username, password).as_bytes());
let mut input = Vec::with_capacity(16 + 32);
input.extend_from_slice(salt);
input.extend_from_slice(&inner);
sha256_bytes(&input)
}
}
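redp1 above maps a hash output to a curve point by try-and-increment: keep bumping the candidate until the Montgomery right-hand side is a square. A sketch of the idea with toy values (A = 6 and p = 17 are arbitrary stand-ins, Curve25519 uses A = 486662, and a plain increment replaces the SHA-256 re-hash):

```rust
const P: u64 = 17;
const A: u64 = 6; // toy Montgomery coefficient, not Curve25519's

fn modpow(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

/// Euler's criterion: a is a square mod P iff a^((P-1)/2) is 0 or 1.
fn is_square(a: u64) -> bool {
    modpow(a, (P - 1) / 2, P) <= 1
}

/// Montgomery RHS: y^2 = x^3 + A*x^2 + x.
fn rhs(x: u64) -> u64 {
    (x * x % P * x + A * x % P * x + x) % P
}

/// Try-and-increment: bump x until the RHS is a square, as redp1 does.
fn map_to_curve_x(mut x: u64) -> u64 {
    loop {
        if is_square(rhs(x % P)) {
            return x % P;
        }
        x += 1;
    }
}
```

Roughly half of all x values yield a square RHS, so the expected number of retries is small, which is why the loop in redp1 terminates quickly in practice.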
fn sha256_bytes(data: &[u8]) -> [u8; 32] {
let mut hasher = Sha256::new();
hasher.update(data);
let result = hasher.finalize();
let mut out = [0u8; 32];
out.copy_from_slice(&result);
out
}
fn bigint_to_32bytes(val: &BigUint) -> [u8; 32] {
let bytes = val.to_bytes_be();
let mut out = [0u8; 32];
let start = 32usize.saturating_sub(bytes.len());
let copy_len = bytes.len().min(32);
out[start..start + copy_len].copy_from_slice(&bytes[bytes.len() - copy_len..]);
out
}
// --- EC-SRP5 Client Authentication ---
/// Perform EC-SRP5 authentication as a client.
/// Called after receiving `03 00 00 00` from the server.
pub async fn client_authenticate<S: AsyncReadExt + AsyncWriteExt + Unpin>(
stream: &mut S,
username: &str,
password: &str,
) -> Result<()> {
tracing::info!("Starting EC-SRP5 authentication");
let w = WCurve::new();
// Generate client ephemeral keypair
let s_a: [u8; 32] = rand::random();
let (x_w_a, x_w_a_parity) = w.gen_public_key(&s_a);
// MSG1: [len][username\0][pubkey:32][parity:1]
let mut payload = Vec::new();
payload.extend_from_slice(username.as_bytes());
payload.push(0x00);
payload.extend_from_slice(&x_w_a);
payload.push(x_w_a_parity);
let mut msg1 = vec![payload.len() as u8];
msg1.extend_from_slice(&payload);
stream.write_all(&msg1).await?;
stream.flush().await?;
tracing::debug!("EC-SRP5: sent client pubkey ({} bytes)", msg1.len());
// MSG2: [len][server_pubkey:32][parity:1][salt:16]
let mut resp_header = [0u8; 1];
stream.read_exact(&mut resp_header).await?;
let resp_len = resp_header[0] as usize;
let mut resp_data = vec![0u8; resp_len];
stream.read_exact(&mut resp_data).await?;
if resp_data.len() < 49 {
return Err(BtestError::Protocol(format!(
"EC-SRP5: server challenge too short ({} bytes)",
resp_data.len()
)));
}
let mut x_w_b = [0u8; 32];
x_w_b.copy_from_slice(&resp_data[0..32]);
let x_w_b_parity = resp_data[32] != 0;
let mut salt = [0u8; 16];
salt.copy_from_slice(&resp_data[33..49]);
tracing::debug!("EC-SRP5: received server challenge (salt={})", hex::encode(&salt));
// Compute shared secret
let i = w.gen_password_validator_priv(username, password, &salt);
let (x_gamma, _) = w.gen_public_key(&i);
// redp1 with parity=true picks the opposite-parity y, i.e. the negation of
// the server's blinding point redp1(x_gamma, false), so adding it unblinds W_b
let v = w.redp1(&x_gamma, true);
let w_b_point = w.lift_x(&BigUint::from_bytes_be(&x_w_b), x_w_b_parity);
let w_b_unblinded = w_b_point.add(&v);
let mut j_input = Vec::with_capacity(64);
j_input.extend_from_slice(&x_w_a);
j_input.extend_from_slice(&x_w_b);
let j = sha256_bytes(&j_input);
let i_int = BigUint::from_bytes_be(&i);
let j_int = BigUint::from_bytes_be(&j);
let s_a_int = BigUint::from_bytes_be(&s_a);
let order = curve_order();
let scalar = ((&i_int * &j_int) + &s_a_int) % &order;
let z_point = w_b_unblinded.scalar_mul(&scalar);
let (z, _) = w.to_montgomery(&z_point);
// MSG3: [len][client_cc:32]
let mut cc_input = Vec::with_capacity(64);
cc_input.extend_from_slice(&j);
cc_input.extend_from_slice(&z);
let client_cc = sha256_bytes(&cc_input);
let mut msg3 = vec![client_cc.len() as u8];
msg3.extend_from_slice(&client_cc);
stream.write_all(&msg3).await?;
stream.flush().await?;
tracing::debug!("EC-SRP5: sent client proof");
// MSG4: [len][server_cc:32]
let mut resp4_header = [0u8; 1];
stream.read_exact(&mut resp4_header).await?;
let resp4_len = resp4_header[0] as usize;
let mut server_cc_received = vec![0u8; resp4_len];
stream.read_exact(&mut server_cc_received).await?;
// Verify server confirmation
let mut sc_input = Vec::with_capacity(96);
sc_input.extend_from_slice(&j);
sc_input.extend_from_slice(&client_cc);
sc_input.extend_from_slice(&z);
let server_cc_expected = sha256_bytes(&sc_input);
if server_cc_received == server_cc_expected {
tracing::info!("EC-SRP5 authentication successful");
Ok(())
} else {
// Check if server sent an error message
if let Ok(msg) = std::str::from_utf8(&server_cc_received) {
Err(BtestError::Protocol(format!(
"EC-SRP5 authentication failed: {}",
msg
)))
} else {
Err(BtestError::AuthFailed)
}
}
}
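Every EC-SRP5 message above shares the same [len: u8][payload] framing. A standalone sketch of the MSG1 encode/decode pair (field layout taken from the code above; the helper names are illustrative, not crate API):

```rust
// MSG1 wire framing: [len: u8][username \0][pubkey: 32][parity: 1].
fn encode_msg1(username: &str, pubkey: &[u8; 32], parity: u8) -> Vec<u8> {
    let mut payload = Vec::with_capacity(username.len() + 34);
    payload.extend_from_slice(username.as_bytes());
    payload.push(0x00); // NUL terminator after the username
    payload.extend_from_slice(pubkey);
    payload.push(parity);
    let mut msg = vec![payload.len() as u8]; // single-byte length prefix
    msg.extend_from_slice(&payload);
    msg
}

fn decode_msg1(msg: &[u8]) -> Option<(String, [u8; 32], u8)> {
    let len = *msg.first()? as usize;
    let payload = msg.get(1..1 + len)?;
    let nul = payload.iter().position(|&b| b == 0)?;
    let username = std::str::from_utf8(&payload[..nul]).ok()?.to_string();
    let rest = &payload[nul + 1..];
    if rest.len() != 33 {
        return None; // expect exactly pubkey (32) + parity (1)
    }
    let mut pk = [0u8; 32];
    pk.copy_from_slice(&rest[..32]);
    Some((username, pk, rest[32]))
}
```

Note the one-byte prefix caps the payload at 255 bytes, so usernames longer than 221 bytes cannot be framed.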
// --- EC-SRP5 Server Authentication ---
/// Server-side EC-SRP5 credential store.
pub struct EcSrp5Credentials {
salt: [u8; 16],
x_gamma: [u8; 32],
gamma_parity: bool,
}
impl EcSrp5Credentials {
/// Derive EC-SRP5 credentials from username/password (done once at startup).
pub fn derive(username: &str, password: &str) -> Self {
let salt: [u8; 16] = rand::random();
let w = WCurve::new();
let i = w.gen_password_validator_priv(username, password, &salt);
let (x_gamma, parity) = w.gen_public_key(&i);
Self {
salt,
x_gamma,
gamma_parity: parity != 0,
}
}
}
/// Perform EC-SRP5 authentication as a server.
/// Called after sending `03 00 00 00` to the client.
pub async fn server_authenticate<S: AsyncReadExt + AsyncWriteExt + Unpin>(
stream: &mut S,
username: &str,
creds: &EcSrp5Credentials,
) -> Result<()> {
tracing::info!("Starting EC-SRP5 server authentication");
let w = WCurve::new();
// MSG1: read [len][username\0][pubkey:32][parity:1]
let mut len_buf = [0u8; 1];
stream.read_exact(&mut len_buf).await?;
let msg_len = len_buf[0] as usize;
let mut msg1_data = vec![0u8; msg_len];
stream.read_exact(&mut msg1_data).await?;
// Parse username
let null_pos = msg1_data.iter().position(|&b| b == 0)
.ok_or_else(|| BtestError::Protocol("EC-SRP5: no null terminator in username".into()))?;
let client_username = std::str::from_utf8(&msg1_data[..null_pos])
.map_err(|_| BtestError::Protocol("EC-SRP5: invalid username encoding".into()))?;
if client_username != username {
tracing::warn!("EC-SRP5: username mismatch (got '{}')", client_username);
return Err(BtestError::AuthFailed);
}
let key_start = null_pos + 1;
if msg1_data.len() < key_start + 33 {
return Err(BtestError::Protocol("EC-SRP5: client message too short".into()));
}
let mut x_w_a = [0u8; 32];
x_w_a.copy_from_slice(&msg1_data[key_start..key_start + 32]);
let x_w_a_parity = msg1_data[key_start + 32] != 0;
tracing::debug!("EC-SRP5: received client pubkey from '{}'", client_username);
// Generate server ephemeral keypair
let s_b: [u8; 32] = rand::random();
let s_b_int = BigUint::from_bytes_be(&s_b);
let pub_b = w.g.scalar_mul(&s_b_int);
// Compute password-entangled public key: W_b = s_b*G + redp1(x_gamma, 0)
let gamma = w.redp1(&creds.x_gamma, false);
let w_b = pub_b.add(&gamma);
let (x_w_b, x_w_b_parity) = w.to_montgomery(&w_b);
// MSG2: [len][server_pubkey:32][parity:1][salt:16]
let mut payload2 = Vec::with_capacity(49);
payload2.extend_from_slice(&x_w_b);
payload2.push(x_w_b_parity);
payload2.extend_from_slice(&creds.salt);
let mut msg2 = vec![payload2.len() as u8];
msg2.extend_from_slice(&payload2);
stream.write_all(&msg2).await?;
stream.flush().await?;
tracing::debug!("EC-SRP5: sent server challenge");
// Compute shared secret (server side: ECPESVDP-SRP-B)
let mut j_input = Vec::with_capacity(64);
j_input.extend_from_slice(&x_w_a);
j_input.extend_from_slice(&x_w_b);
let j = sha256_bytes(&j_input);
let j_int = BigUint::from_bytes_be(&j);
// Server ECPESVDP-SRP-B: Z = s_b * (W_a + j * gamma)
// gamma = lift_x(x_gamma, stored parity) — the raw validator public key point
// (NOT redp1 — that's used for blinding W_b, not for verification)
let w_a = w.lift_x(&BigUint::from_bytes_be(&x_w_a), x_w_a_parity);
let gamma = w.lift_x(&BigUint::from_bytes_be(&creds.x_gamma), creds.gamma_parity);
let j_gamma = gamma.scalar_mul(&j_int);
let sum = w_a.add(&j_gamma);
let z_point = sum.scalar_mul(&s_b_int);
let (z, _) = w.to_montgomery(&z_point);
// MSG3: read [len][client_cc:32]
let mut len3 = [0u8; 1];
stream.read_exact(&mut len3).await?;
let mut client_cc = vec![0u8; len3[0] as usize];
stream.read_exact(&mut client_cc).await?;
// Verify client confirmation
let mut cc_input = Vec::with_capacity(64);
cc_input.extend_from_slice(&j);
cc_input.extend_from_slice(&z);
let expected_cc = sha256_bytes(&cc_input);
if client_cc != expected_cc {
tracing::warn!("EC-SRP5: client proof mismatch");
return Err(BtestError::AuthFailed);
}
// MSG4: [len][server_cc:32]
let mut sc_input = Vec::with_capacity(96);
sc_input.extend_from_slice(&j);
sc_input.extend_from_slice(&client_cc);
sc_input.extend_from_slice(&z);
let server_cc = sha256_bytes(&sc_input);
let mut msg4 = vec![server_cc.len() as u8];
msg4.extend_from_slice(&server_cc);
stream.write_all(&msg4).await?;
stream.flush().await?;
tracing::info!("EC-SRP5 server authentication successful for '{}'", client_username);
Ok(())
}
mod hex {
pub fn encode(data: &[u8]) -> String {
data.iter().map(|b| format!("{:02x}", b)).collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_curve_generator() {
let w = WCurve::new();
assert!(!w.g.infinity);
// Generator from lift_x(9, false) should produce a valid point
let (x_mont, _) = w.to_montgomery(&w.g);
let x_int = BigUint::from_bytes_be(&x_mont);
assert_eq!(x_int, BigUint::from(9u32));
}
#[test]
fn test_pubkey_generation() {
let w = WCurve::new();
let priv_key = [1u8; 32];
let (pubkey, parity) = w.gen_public_key(&priv_key);
assert_ne!(pubkey, [0u8; 32]);
assert!(parity <= 1);
}
#[test]
fn test_password_validator() {
let w = WCurve::new();
let salt = [0x01u8, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10];
let i = w.gen_password_validator_priv("testuser", "testpass", &salt);
assert_ne!(i, [0u8; 32]);
// Deterministic: same inputs produce same output
let i2 = w.gen_password_validator_priv("testuser", "testpass", &salt);
assert_eq!(i, i2);
// Different password produces different result
let i3 = w.gen_password_validator_priv("testuser", "other", &salt);
assert_ne!(i, i3);
}
#[test]
fn test_redp1() {
let w = WCurve::new();
let input = [42u8; 32];
let pt = w.redp1(&input, false);
assert!(!pt.infinity);
}
#[test]
fn test_scalar_mul_identity() {
let w = WCurve::new();
let one = BigUint::one();
let pt = w.g.scalar_mul(&one);
assert_eq!(pt.x, w.g.x);
assert_eq!(pt.y, w.g.y);
}
}

View File

@@ -1,5 +1,9 @@
pub mod auth;
pub mod bandwidth;
pub mod client;
pub mod cpu;
pub mod csv_output;
pub mod ecsrp5;
pub mod protocol;
pub mod server;
pub mod syslog_logger;

View File

@@ -1,8 +1,12 @@
mod auth;
mod bandwidth;
mod client;
mod cpu;
pub mod csv_output;
mod ecsrp5;
mod protocol;
mod server;
pub mod syslog_logger;
use clap::Parser;
use tracing_subscriber::EnvFilter;
@@ -48,6 +52,14 @@ struct Cli {
#[arg(short = 'P', long = "port", default_value_t = BTEST_PORT)]
port: u16,
/// Listen address for IPv4 (default: 0.0.0.0, use "none" to disable)
#[arg(long = "listen", default_value = "0.0.0.0")]
listen_addr: String,
/// Enable IPv6 listener (experimental — TCP works, UDP has issues on macOS)
#[arg(long = "listen6", default_missing_value = "::", num_args = 0..=1)]
listen6_addr: Option<String>,
/// Authentication username
#[arg(short = 'a', long = "authuser")]
auth_user: Option<String>,
@@ -56,10 +68,30 @@ struct Cli {
#[arg(short = 'p', long = "authpass")]
auth_pass: Option<String>,
/// Use EC-SRP5 authentication (RouterOS >= 6.43 compatible)
#[arg(long = "ecsrp5")]
ecsrp5: bool,
/// NAT mode - send probe packet to open firewall
#[arg(short = 'n', long = "nat")]
nat: bool,
/// Test duration in seconds (client mode, 0=unlimited)
#[arg(short = 'd', long = "duration", default_value_t = 0)]
duration: u64,
/// Output results to CSV file (appends if exists)
#[arg(long = "csv")]
csv: Option<String>,
/// Suppress terminal output (use with --csv for machine-readable only)
#[arg(long = "quiet", short = 'q')]
quiet: bool,
/// Send logs to remote syslog server (e.g., 192.168.1.1:514)
#[arg(long = "syslog")]
syslog: Option<String>,
/// Verbose logging (repeat for more: -v, -vv, -vvv)
#[arg(short = 'v', long = "verbose", action = clap::ArgAction::Count)]
verbose: u8,
@@ -69,6 +101,9 @@ struct Cli {
async fn main() -> anyhow::Result<()> {
let cli = Cli::parse();
// Start CPU usage sampler
cpu::start_sampler();
// Set up logging based on verbosity
let filter = match cli.verbose {
0 => "info",
@@ -82,10 +117,27 @@ async fn main() -> anyhow::Result<()> {
.with_target(false)
.init();
// Initialize syslog if requested
if let Some(ref syslog_addr) = cli.syslog {
if let Err(e) = syslog_logger::init(syslog_addr) {
eprintln!("Warning: failed to initialize syslog to {}: {}", syslog_addr, e);
}
}
// Initialize CSV output if requested
if let Some(ref csv_path) = cli.csv {
if let Err(e) = csv_output::init(csv_path) {
eprintln!("Warning: failed to initialize CSV output to {}: {}", csv_path, e);
}
}
csv_output::set_quiet(cli.quiet);
if cli.server {
// Server mode
let v4 = if cli.listen_addr.eq_ignore_ascii_case("none") { None } else { Some(cli.listen_addr) };
let v6 = cli.listen6_addr; // None unless --listen6 is passed
tracing::info!("Starting btest server on port {}", cli.port);
-server::run_server(cli.port, cli.auth_user, cli.auth_pass).await?;
+server::run_server(cli.port, cli.auth_user, cli.auth_pass, cli.ecsrp5, v4, v6).await?;
} else if let Some(host) = cli.client {
// Client mode - must specify at least one direction
if !cli.transmit && !cli.receive {
@@ -116,18 +168,71 @@ async fn main() -> anyhow::Result<()> {
_ => (0, 0),
};
-client::run_client(
let dir_str = match direction {
CMD_DIR_RX => "send",
CMD_DIR_TX => "receive",
CMD_DIR_BOTH => "both",
_ => "unknown",
};
let proto_str = if cli.udp { "UDP" } else { "TCP" };
// Create shared state that survives timeout cancellation
let shared_state = bandwidth::BandwidthState::new();
// Log test start
syslog_logger::test_start(&host, proto_str, dir_str, 0);
// Run client with optional duration timeout
let start = std::time::Instant::now();
let client_fut = client::run_client(
&host,
cli.port,
direction,
cli.udp,
tx_speed,
rx_speed,
-cli.auth_user,
-cli.auth_pass,
+cli.auth_user.clone(),
+cli.auth_pass.clone(),
cli.nat,
-)
-.await?;
+shared_state.clone(),
+);
if cli.duration > 0 {
match tokio::time::timeout(
std::time::Duration::from_secs(cli.duration),
client_fut,
)
.await
{
Ok(result) => { let _ = result?; },
Err(_) => {
// Timeout — signal stop
shared_state.running.store(false, std::sync::atomic::Ordering::SeqCst);
}
}
} else {
let _ = client_fut.await?;
}
let elapsed = start.elapsed().as_secs();
let (total_tx, total_rx, total_lost, _intervals) = shared_state.summary();
// Log test end to syslog
syslog_logger::test_end(
&host, proto_str, dir_str,
total_tx, total_rx, total_lost, elapsed as u32,
);
// Write CSV if enabled
if csv_output::is_enabled() {
let auth_type = if cli.auth_user.is_some() { "auth" } else { "none" };
let local_cpu = cpu::get();
let remote_cpu = shared_state.remote_cpu.load(std::sync::atomic::Ordering::Relaxed);
csv_output::write_result(
&host, cli.port, proto_str, dir_str,
elapsed, total_tx, total_rx, total_lost, local_cpu, remote_cpu, auth_type,
);
}
} else {
eprintln!("Error: Must specify either -s (server) or -c <host> (client)");
eprintln!("Run with --help for usage information.");

View File

@@ -137,23 +137,31 @@ impl Command {
pub struct StatusMessage {
pub seq: u32,
pub bytes_received: u32,
pub cpu_load: u8,
}
impl StatusMessage {
pub fn serialize(&self) -> [u8; STATUS_MSG_SIZE] {
let mut buf = [0u8; STATUS_MSG_SIZE];
buf[0] = STATUS_MSG_TYPE;
-buf[1..5].copy_from_slice(&self.seq.to_be_bytes());
-buf[5] = 0;
-buf[6] = 0;
-buf[7] = 0;
+// Byte 1: CPU load with high bit set (MikroTik format: 0x80 | percentage)
+buf[1] = 0x80 | (self.cpu_load & 0x7F);
+buf[2] = 0;
+buf[3] = 0;
+// Bytes 4-7: sequence number (LE)
+buf[4..8].copy_from_slice(&self.seq.to_le_bytes());
+// Bytes 8-11: bytes received (LE)
buf[8..12].copy_from_slice(&self.bytes_received.to_le_bytes());
buf
}
pub fn deserialize(buf: &[u8; STATUS_MSG_SIZE]) -> Self {
// MikroTik encodes CPU with high bit set: actual = byte & 0x7F
let raw_cpu = buf[1];
let cpu = if raw_cpu & 0x80 != 0 { raw_cpu & 0x7F } else { raw_cpu };
Self {
-seq: u32::from_be_bytes([buf[1], buf[2], buf[3], buf[4]]),
+cpu_load: cpu.min(100),
+seq: u32::from_le_bytes([buf[4], buf[5], buf[6], buf[7]]),
bytes_received: u32::from_le_bytes([buf[8], buf[9], buf[10], buf[11]]),
}
}
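The revised 12-byte status layout can be exercised as a standalone pack/unpack pair. A sketch (msg_type is a parameter here because STATUS_MSG_TYPE's value is defined elsewhere in protocol.rs; the decoder tests the 0x80 flag bit itself so an encoded 0% CPU decodes back to 0):

```rust
// Standalone sketch of the 12-byte status message layout above.
fn pack_status(msg_type: u8, cpu: u8, seq: u32, bytes_rx: u32) -> [u8; 12] {
    let mut buf = [0u8; 12];
    buf[0] = msg_type;
    buf[1] = 0x80 | (cpu & 0x7F); // MikroTik sets the high bit on CPU load
    // buf[2..4] stay zero
    buf[4..8].copy_from_slice(&seq.to_le_bytes());
    buf[8..12].copy_from_slice(&bytes_rx.to_le_bytes());
    buf
}

/// Returns (cpu_load, seq, bytes_received).
fn unpack_status(buf: &[u8; 12]) -> (u8, u32, u32) {
    let raw = buf[1];
    // Test the flag bit itself, so 0x80 (0% CPU) decodes to 0
    let cpu = if raw & 0x80 != 0 { raw & 0x7F } else { raw };
    (
        cpu.min(100),
        u32::from_le_bytes([buf[4], buf[5], buf[6], buf[7]]),
        u32::from_le_bytes([buf[8], buf[9], buf[10], buf[11]]),
    )
}
```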
@@ -188,6 +196,7 @@ pub async fn send_command<W: AsyncWriteExt + Unpin>(
Ok(())
}
#[allow(dead_code)]
pub async fn recv_command<R: AsyncReadExt + Unpin>(reader: &mut R) -> Result<Command> {
let mut buf = [0u8; 16];
reader.read_exact(&mut buf).await?;

View File

@@ -26,28 +26,99 @@ pub async fn run_server(
port: u16,
auth_user: Option<String>,
auth_pass: Option<String>,
use_ecsrp5: bool,
listen_v4: Option<String>,
listen_v6: Option<String>,
) -> Result<()> {
let addr = format!("0.0.0.0:{}", port);
let listener = TcpListener::bind(&addr).await?;
tracing::info!("btest server listening on {}", addr);
// Pre-derive EC-SRP5 credentials if enabled
let ecsrp5_creds = if use_ecsrp5 {
match (auth_user.as_deref(), auth_pass.as_deref()) {
(Some(user), Some(pass)) => {
tracing::info!("EC-SRP5 authentication enabled for user '{}'", user);
Some(Arc::new(crate::ecsrp5::EcSrp5Credentials::derive(user, pass)))
}
_ => {
tracing::warn!("--ecsrp5 requires -a and -p to be set");
None
}
}
} else {
None
};
let udp_port_offset = Arc::new(std::sync::atomic::AtomicU16::new(0));
let sessions: SessionMap = Arc::new(Mutex::new(HashMap::new()));
// Bind IPv4 listener
let v4_listener = if let Some(ref addr) = listen_v4 {
let bind_addr = format!("{}:{}", addr, port);
match TcpListener::bind(&bind_addr).await {
Ok(l) => {
tracing::info!("Listening on {} (IPv4)", bind_addr);
Some(l)
}
Err(e) => {
tracing::error!("Failed to bind {}: {}", bind_addr, e);
None
}
}
} else {
None
};
// Bind IPv6 listener
let v6_listener = if let Some(ref addr) = listen_v6 {
let bind_addr = format!("[{}]:{}", addr, port);
match TcpListener::bind(&bind_addr).await {
Ok(l) => {
tracing::info!("Listening on {} (IPv6)", bind_addr);
Some(l)
}
Err(e) => {
tracing::error!("Failed to bind {}: {}", bind_addr, e);
None
}
}
} else {
None
};
if v4_listener.is_none() && v6_listener.is_none() {
return Err(crate::protocol::BtestError::Protocol(
"No listeners bound. Check --listen and --listen6 addresses.".into(),
));
}
loop {
let (stream, peer) = listener.accept().await?;
// Accept from whichever listener has a connection ready
let (stream, peer) = match (&v4_listener, &v6_listener) {
(Some(v4), Some(v6)) => {
tokio::select! {
r = v4.accept() => r?,
r = v6.accept() => r?,
}
}
(Some(v4), None) => v4.accept().await?,
(None, Some(v6)) => v6.accept().await?,
(None, None) => unreachable!(),
};
tracing::info!("New connection from {}", peer);
let auth_user = auth_user.clone();
let auth_pass = auth_pass.clone();
let udp_offset = udp_port_offset.clone();
let sessions = sessions.clone();
let ecsrp5 = ecsrp5_creds.clone();
tokio::spawn(async move {
if let Err(e) =
-handle_client(stream, peer, auth_user, auth_pass, udp_offset, sessions).await
+handle_client(stream, peer, auth_user, auth_pass, udp_offset, sessions, ecsrp5).await
{
-tracing::error!("Client {} error: {}", peer, e);
let err_str = format!("{}", e);
tracing::error!("Client {} error: {}", peer, err_str);
if err_str.contains("uth") {
crate::syslog_logger::auth_failure(&peer.to_string(), "-", "-", &err_str);
}
}
});
}
@@ -60,6 +131,7 @@ async fn handle_client(
auth_pass: Option<String>,
udp_port_offset: Arc<std::sync::atomic::AtomicU16>,
sessions: SessionMap,
ecsrp5_creds: Option<Arc<crate::ecsrp5::EcSrp5Credentials>>,
) -> Result<()> {
stream.set_nodelay(true)?;
@@ -182,15 +254,41 @@ async fn handle_client(
}
// Primary connection auth
-auth::server_authenticate(
-&mut stream,
-auth_user.as_deref(),
-auth_pass.as_deref(),
-&ok_response,
-)
-.await?;
if let Some(ref creds) = ecsrp5_creds {
// EC-SRP5 authentication
let auth_resp: [u8; 4] = [0x03, 0x00, 0x00, 0x00];
stream.write_all(&auth_resp).await?;
stream.flush().await?;
if cmd.is_udp() {
crate::ecsrp5::server_authenticate(
&mut stream,
auth_user.as_deref().unwrap_or("admin"),
creds,
)
.await?;
// Send auth OK (with session token if multi-conn)
stream.write_all(&ok_response).await?;
stream.flush().await?;
} else {
// MD5 or no auth
auth::server_authenticate(
&mut stream,
auth_user.as_deref(),
auth_pass.as_deref(),
&ok_response,
)
.await?;
}
// Log auth success and test start
let auth_type = if ecsrp5_creds.is_some() { "ecsrp5" } else if auth_user.is_some() { "md5" } else { "none" };
let proto_str = if cmd.is_udp() { "UDP" } else { "TCP" };
let dir_str = match cmd.direction { CMD_DIR_RX => "RX", CMD_DIR_TX => "TX", _ => "BOTH" };
crate::syslog_logger::auth_success(&peer.to_string(), auth_user.as_deref().unwrap_or("-"), auth_type);
crate::syslog_logger::test_start(&peer.to_string(), proto_str, dir_str, cmd.tcp_conn_count);
let result = if cmd.is_udp() {
run_udp_test_server(&mut stream, peer, &cmd, udp_port_offset).await
} else if is_tcp_multi {
let conn_count = cmd.tcp_conn_count;
@@ -235,24 +333,76 @@ async fn handle_client(
.unwrap_or_default()
};
let mut all_streams = vec![stream];
all_streams.extend(extra_streams);
tracing::info!(
"TCP multi-connection: starting with {} total streams",
-1 + extra_streams.len(),
+all_streams.len(),
);
-// Run test - primary stream handles data, extras provide parallel TCP bandwidth
-// For now just use the primary; extras keep the connection alive
-let _extra_keepalive = extra_streams;
-run_tcp_test_server(stream, cmd).await
+run_tcp_multiconn_server(all_streams, cmd).await
} else {
run_tcp_test_server(stream, cmd).await
};
let (total_tx, total_rx, total_lost, intervals) = match &result {
Ok(summary) => *summary,
Err(_) => (0, 0, 0, 0),
};
crate::syslog_logger::test_end(
&peer.to_string(), proto_str, dir_str,
total_tx, total_rx, total_lost, intervals,
);
if crate::csv_output::is_enabled() {
crate::csv_output::write_result(
&peer.ip().to_string(), peer.port(), proto_str, dir_str,
intervals as u64, total_tx, total_rx, total_lost,
crate::cpu::get(), 0, auth_type,
);
}
result.map(|_| ())
}
// --- TCP Test Server ---
-async fn run_tcp_test_server(stream: TcpStream, cmd: Command) -> Result<()> {
/// Public TX task for multi-connection use by server_pro.
#[cfg(feature = "pro")]
pub async fn tcp_tx_task(
writer: tokio::net::tcp::OwnedWriteHalf,
tx_size: usize,
tx_speed: u32,
state: Arc<BandwidthState>,
) {
tcp_tx_loop(writer, tx_size, tx_speed, state).await;
}
/// Public RX task for multi-connection use by server_pro.
#[cfg(feature = "pro")]
pub async fn tcp_rx_task(
reader: tokio::net::tcp::OwnedReadHalf,
state: Arc<BandwidthState>,
) {
tcp_rx_loop(reader, state).await;
}
/// Run a TCP bandwidth test on an already-authenticated stream.
/// Public API for use by server_pro.
#[cfg(feature = "pro")]
pub async fn run_tcp_test(
stream: TcpStream,
cmd: Command,
state: Arc<BandwidthState>,
) -> Result<(u64, u64, u64, u32)> {
run_tcp_test_inner(stream, cmd, state).await
}
async fn run_tcp_test_server(stream: TcpStream, cmd: Command) -> Result<(u64, u64, u64, u32)> {
let state = BandwidthState::new();
run_tcp_test_inner(stream, cmd, state).await
}
async fn run_tcp_test_inner(stream: TcpStream, cmd: Command, state: Arc<BandwidthState>) -> Result<(u64, u64, u64, u32)> {
let tx_size = cmd.tx_size as usize;
let server_should_tx = cmd.server_tx();
let server_should_rx = cmd.server_rx();
@@ -260,15 +410,26 @@ async fn run_tcp_test_server(stream: TcpStream, cmd: Command) -> Result<()> {
let (reader, writer) = stream.into_split();
// IMPORTANT: Do NOT drop unused halves - dropping sends TCP FIN
let mut _writer_keepalive = None;
let mut _reader_keepalive = None;
let state_tx = state.clone();
-let tx_handle = if server_should_tx {
+let tx_handle = if server_should_tx && server_should_rx {
// BOTH mode: TX data + inject status messages for the RX direction
Some(tokio::spawn(async move {
tcp_tx_with_status(writer, tx_size, tx_speed, state_tx).await
}))
} else if server_should_tx {
// TX only
Some(tokio::spawn(async move {
tcp_tx_loop(writer, tx_size, tx_speed, state_tx).await
}))
} else if server_should_rx {
// RX only: use writer for status messages
let st = state.clone();
Some(tokio::spawn(async move {
tcp_status_sender(writer, st).await
}))
} else {
_writer_keepalive = Some(writer);
None
@@ -284,12 +445,105 @@ async fn run_tcp_test_server(stream: TcpStream, cmd: Command) -> Result<()> {
None
};
-status_report_loop(&cmd, &state).await;
if server_should_tx && !server_should_rx {
// TX-only: normal status loop reports TX stats
status_report_loop(&cmd, &state).await;
} else if server_should_tx && server_should_rx {
// BOTH: TX loop injects status + prints RX. Just report TX here.
let mut seq: u32 = 0;
let mut tick = tokio::time::interval(Duration::from_secs(1));
loop {
tick.tick().await;
if !state.running.load(Ordering::Relaxed) { break; }
seq += 1;
let tx = state.tx_bytes.swap(0, Ordering::Relaxed);
bandwidth::print_status(seq, "TX", tx, Duration::from_secs(1), None);
}
} else {
// RX-only: tcp_status_sender handles everything. Just wait.
while state.running.load(Ordering::Relaxed) {
tokio::time::sleep(Duration::from_millis(500)).await;
}
}
state.running.store(false, Ordering::SeqCst);
if let Some(h) = tx_handle { let _ = h.await; }
if let Some(h) = rx_handle { let _ = h.await; }
-Ok(())
+Ok(state.summary())
}
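The pacing in tcp_tx_loop_inner leans on two bandwidth helpers. A sketch of their assumed contracts (both names are taken from the diff above; the bits-per-second unit for tx_speed is an assumption):

```rust
use std::time::{Duration, Instant};

// Assumed contract of bandwidth::advance_next_send: bump the absolute
// deadline by one interval and report how long to sleep, or None when we
// are already behind schedule (send immediately and catch up).
fn advance_next_send(next_send: &mut Instant, iv: Duration, now: Instant) -> Option<Duration> {
    *next_send += iv;
    if *next_send > now {
        Some(*next_send - now)
    } else {
        None
    }
}

// Per-packet interval for a target rate; bits-per-second is an assumption.
fn calc_send_interval(speed_bps: u32, packet_bytes: u16) -> Option<Duration> {
    if speed_bps == 0 {
        return None; // 0 = unlimited, no pacing
    }
    let nanos = (packet_bytes as u64 * 8).saturating_mul(1_000_000_000) / speed_bps as u64;
    Some(Duration::from_nanos(nanos))
}
```

At 1 Mbit/s with 1500-byte packets this yields a 12 ms interval; keeping an absolute deadline (rather than sleeping a fixed interval after each send) preserves the long-run rate even when individual sleeps overshoot.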
/// Public API for multi-connection TCP test with external state. Used by server_pro.
#[cfg(feature = "pro")]
pub async fn run_tcp_multiconn_test(
streams: Vec<TcpStream>,
cmd: Command,
state: Arc<BandwidthState>,
) -> Result<(u64, u64, u64, u32)> {
run_tcp_multiconn_inner(streams, cmd, state).await
}
/// TCP multi-connection.
async fn run_tcp_multiconn_server(streams: Vec<TcpStream>, cmd: Command) -> Result<(u64, u64, u64, u32)> {
let state = BandwidthState::new();
run_tcp_multiconn_inner(streams, cmd, state).await
}
async fn run_tcp_multiconn_inner(streams: Vec<TcpStream>, cmd: Command, state: Arc<BandwidthState>) -> Result<(u64, u64, u64, u32)> {
let tx_size = cmd.tx_size as usize;
let server_should_tx = cmd.server_tx();
let server_should_rx = cmd.server_rx();
let tx_speed = cmd.remote_tx_speed;
let mut tx_handles = Vec::new();
let mut rx_handles = Vec::new();
let mut _writer_keepalives: Vec<tokio::net::tcp::OwnedWriteHalf> = Vec::new();
let mut _reader_keepalives: Vec<tokio::net::tcp::OwnedReadHalf> = Vec::new();
for tcp_stream in streams {
let (reader, writer) = tcp_stream.into_split();
if server_should_tx && server_should_rx {
let st = state.clone();
tx_handles.push(tokio::spawn(async move {
tcp_tx_with_status(writer, tx_size, tx_speed, st).await
}));
} else if server_should_tx {
let st = state.clone();
tx_handles.push(tokio::spawn(async move {
tcp_tx_loop(writer, tx_size, tx_speed, st).await
}));
} else if server_should_rx {
let st = state.clone();
tx_handles.push(tokio::spawn(async move {
tcp_status_sender(writer, st).await
}));
} else {
_writer_keepalives.push(writer);
}
if server_should_rx {
let st = state.clone();
rx_handles.push(tokio::spawn(async move {
tcp_rx_loop(reader, st).await
}));
} else {
_reader_keepalives.push(reader);
}
}
tracing::info!(
"TCP multi-conn: {} TX tasks, {} RX tasks",
tx_handles.len(),
rx_handles.len(),
);
status_report_loop(&cmd, &state).await;
state.running.store(false, Ordering::SeqCst);
for h in tx_handles { let _ = h.await; }
for h in rx_handles { let _ = h.await; }
tracing::info!("TCP multi-connection test ended");
Ok(state.summary())
}
async fn tcp_tx_loop(
@@ -297,16 +551,59 @@ async fn tcp_tx_loop(
tx_size: usize,
tx_speed: u32,
state: Arc<BandwidthState>,
) {
tcp_tx_loop_inner(&mut writer, tx_size, tx_speed, &state, false).await;
}
/// TCP TX loop that also sends status messages when `send_status` is true.
/// Used in bidirectional mode where the writer handles both data and status.
async fn tcp_tx_with_status(
mut writer: tokio::net::tcp::OwnedWriteHalf,
tx_size: usize,
tx_speed: u32,
state: Arc<BandwidthState>,
) {
tcp_tx_loop_inner(&mut writer, tx_size, tx_speed, &state, true).await;
}
async fn tcp_tx_loop_inner(
writer: &mut tokio::net::tcp::OwnedWriteHalf,
tx_size: usize,
tx_speed: u32,
state: &Arc<BandwidthState>,
send_status: bool,
) {
tokio::time::sleep(Duration::from_millis(100)).await;
-let mut packet = vec![0u8; tx_size];
-packet[0] = STATUS_MSG_TYPE;
+let packet = vec![0u8; tx_size];
let mut interval = bandwidth::calc_send_interval(tx_speed, tx_size as u16);
let mut next_send = Instant::now();
let mut next_status = Instant::now() + Duration::from_secs(1);
let mut status_seq: u32 = 0;
while state.running.load(Ordering::Relaxed) {
// Inject status message every ~1 second if in bidirectional mode
if send_status && Instant::now() >= next_status {
status_seq += 1;
let rx_bytes = state.rx_bytes.swap(0, Ordering::Relaxed);
let status = StatusMessage { cpu_load: crate::cpu::get(),
seq: status_seq,
bytes_received: rx_bytes as u32,
};
if writer.write_all(&status.serialize()).await.is_err() {
state.running.store(false, Ordering::SeqCst);
break;
}
state.record_interval(0, rx_bytes, 0);
bandwidth::print_status(status_seq, "RX", rx_bytes, Duration::from_secs(1), None);
next_status = Instant::now() + Duration::from_secs(1);
}
if !state.spend_budget(tx_size as u64) {
break;
}
if writer.write_all(&packet).await.is_err() {
state.running.store(false, Ordering::SeqCst);
break;
}
state.tx_bytes.fetch_add(tx_size as u64, Ordering::Relaxed);
@@ -320,10 +617,9 @@ async fn tcp_tx_loop(
match interval {
Some(iv) => {
next_send += iv;
let now = Instant::now();
if next_send > now {
tokio::time::sleep(next_send - now).await;
if let Some(delay) = bandwidth::advance_next_send(&mut next_send, iv, now) {
tokio::time::sleep(delay).await;
}
}
None => {
@@ -337,24 +633,92 @@ async fn tcp_rx_loop(mut reader: tokio::net::tcp::OwnedReadHalf, state: Arc<Band
let mut buf = vec![0u8; 65536];
while state.running.load(Ordering::Relaxed) {
match reader.read(&mut buf).await {
Ok(0) | Err(_) => break,
Ok(0) | Err(_) => {
state.running.store(false, Ordering::SeqCst);
break;
}
Ok(n) => {
if !state.spend_budget(n as u64) {
break;
}
state.rx_bytes.fetch_add(n as u64, Ordering::Relaxed);
}
}
}
}
/// Send periodic 12-byte status messages on the TCP connection.
/// Used when server is in RX mode — tells the client how many bytes we received.
/// Send periodic 12-byte status messages on the TCP connection AND print local stats.
/// Used when server is in RX-only mode. Replaces the normal status_report_loop
/// because it needs the writer and must own the rx_bytes swap.
async fn tcp_status_sender(
mut writer: tokio::net::tcp::OwnedWriteHalf,
state: Arc<BandwidthState>,
) {
let mut seq: u32 = 0;
let mut interval = tokio::time::interval(Duration::from_secs(1));
interval.tick().await;
while state.running.load(Ordering::Relaxed) {
interval.tick().await;
if !state.running.load(Ordering::Relaxed) {
break;
}
seq += 1;
// Swap to get bytes received this interval (atomic reset)
let rx_bytes = state.rx_bytes.swap(0, Ordering::Relaxed);
let status = StatusMessage { cpu_load: crate::cpu::get(),
seq,
bytes_received: rx_bytes as u32,
};
if writer.write_all(&status.serialize()).await.is_err() {
state.running.store(false, Ordering::SeqCst);
break;
}
let _ = writer.flush().await;
state.record_interval(0, rx_bytes, 0);
bandwidth::print_status(seq, "RX", rx_bytes, Duration::from_secs(1), None);
}
}
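The 12-byte status messages written by this sender come from `StatusMessage::serialize()`, which this diff never shows. As a purely illustrative sketch (field order, widths, and endianness here are assumptions, not the real MikroTik wire format):

```rust
use std::convert::TryInto;

// Hypothetical layout for the 12-byte status message; the real
// StatusMessage::serialize()/deserialize() are not part of this diff.
struct StatusMessage {
    cpu_load: u32, // assumed width; the real protocol may use fewer bytes
    seq: u32,
    bytes_received: u32,
}

impl StatusMessage {
    fn serialize(&self) -> [u8; 12] {
        let mut buf = [0u8; 12];
        buf[0..4].copy_from_slice(&self.seq.to_be_bytes());
        buf[4..8].copy_from_slice(&self.bytes_received.to_be_bytes());
        buf[8..12].copy_from_slice(&self.cpu_load.to_be_bytes());
        buf
    }

    fn deserialize(buf: &[u8; 12]) -> Self {
        Self {
            seq: u32::from_be_bytes(buf[0..4].try_into().unwrap()),
            bytes_received: u32::from_be_bytes(buf[4..8].try_into().unwrap()),
            cpu_load: u32::from_be_bytes(buf[8..12].try_into().unwrap()),
        }
    }
}

fn main() {
    let m = StatusMessage { cpu_load: 7, seq: 1, bytes_received: 1500 };
    let rt = StatusMessage::deserialize(&m.serialize());
    assert_eq!(rt.seq, 1);
    assert_eq!(rt.bytes_received, 1500);
    assert_eq!(rt.cpu_load, 7);
}
```

Whatever the real layout, the round-trip property above is what both the TCP status sender and `udp_status_loop` rely on.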
// --- UDP Test Server ---
/// Run a UDP bandwidth test on an already-authenticated stream.
/// Public API for use by server_pro. Caller provides the UDP port offset.
#[cfg(feature = "pro")]
pub async fn run_udp_test(
stream: &mut TcpStream,
peer: SocketAddr,
cmd: &Command,
state: Arc<BandwidthState>,
udp_port_start: u16,
) -> Result<(u64, u64, u64, u32)> {
run_udp_test_inner(stream, peer, cmd, state, udp_port_start).await
}
async fn run_udp_test_server(
stream: &mut TcpStream,
peer: SocketAddr,
cmd: &Command,
udp_port_offset: Arc<std::sync::atomic::AtomicU16>,
) -> Result<()> {
) -> Result<(u64, u64, u64, u32)> {
let offset = udp_port_offset.fetch_add(1, Ordering::SeqCst);
let server_udp_port = BTEST_UDP_PORT_START + offset;
let state = BandwidthState::new();
run_udp_test_inner(stream, peer, cmd, state, BTEST_UDP_PORT_START + offset).await
}
async fn run_udp_test_inner(
stream: &mut TcpStream,
peer: SocketAddr,
cmd: &Command,
state: Arc<BandwidthState>,
server_udp_port: u16,
) -> Result<(u64, u64, u64, u32)> {
let client_udp_port = server_udp_port + BTEST_PORT_CLIENT_OFFSET;
stream.write_all(&server_udp_port.to_be_bytes()).await?;
@@ -365,28 +729,62 @@ async fn run_udp_test_server(
server_udp_port, client_udp_port, peer,
);
let udp = UdpSocket::bind(format!("0.0.0.0:{}", server_udp_port)).await?;
let client_udp_addr: SocketAddr =
format!("{}:{}", peer.ip(), client_udp_port).parse().unwrap();
// Bind UDP on the same address family as the peer
let bind_addr: SocketAddr = if peer.is_ipv6() {
format!("[::]:{}", server_udp_port).parse().unwrap()
} else {
format!("0.0.0.0:{}", server_udp_port).parse().unwrap()
};
// Create socket with socket2 FIRST to set buffer sizes before tokio wraps it
let domain = if peer.is_ipv6() {
socket2::Domain::IPV6
} else {
socket2::Domain::IPV4
};
let sock2 = socket2::Socket::new(domain, socket2::Type::DGRAM, Some(socket2::Protocol::UDP))?;
sock2.set_nonblocking(true)?;
let _ = sock2.set_send_buffer_size(4 * 1024 * 1024);
let _ = sock2.set_recv_buffer_size(4 * 1024 * 1024);
if peer.is_ipv6() {
let _ = sock2.set_only_v6(true);
}
sock2.bind(&bind_addr.into())?;
tracing::debug!(
"UDP socket: sndbuf={}, rcvbuf={}",
sock2.send_buffer_size().unwrap_or(0),
sock2.recv_buffer_size().unwrap_or(0),
);
let udp = UdpSocket::from_std(sock2.into())?;
let client_udp_addr = SocketAddr::new(peer.ip(), client_udp_port);
// On IPv6, send a probe packet to trigger NDP neighbor resolution before blasting.
// macOS returns ENOBUFS on send_to() until the neighbor cache is populated.
if peer.is_ipv6() {
let _ = udp.send_to(&[0u8; 1], client_udp_addr).await;
tokio::time::sleep(Duration::from_millis(200)).await;
tracing::debug!("IPv6 NDP probe sent to {}", client_udp_addr);
}
// When connection_count > 1, MikroTik sends UDP from MULTIPLE source ports
// (base_port, base_port+1, ..., base_port+N-1) all to our single server port.
// A connect()'d UDP socket only accepts from the one connected address,
// silently dropping packets from the other ports.
// So: only connect() for single-connection mode (enables send() without addr).
// For multi-connection, we leave the socket unconnected and use send_to()/recv_from().
let multi_conn = cmd.tcp_conn_count > 0;
if !multi_conn {
// Only use unconnected socket for multi-connection mode (MikroTik sends
// from multiple source ports). For single-connection, always connect() —
// this is critical for IPv6 where send_to() hits ENOBUFS but send() works.
// recv_from() works fine on connected sockets for single source.
let use_unconnected = cmd.tcp_conn_count > 0;
if !use_unconnected {
udp.connect(client_udp_addr).await?;
}
tracing::info!(
"UDP mode: conn_count={}, socket={}",
cmd.tcp_conn_count.max(1),
if multi_conn { "unconnected (multi-port RX)" } else { "connected" },
if use_unconnected { "unconnected" } else { "connected" },
);
let state = BandwidthState::new();
let tx_size = cmd.tx_size as usize;
let server_should_tx = cmd.server_tx();
let server_should_rx = cmd.server_rx();
@@ -397,7 +795,7 @@ async fn run_udp_test_server(
let state_tx = state.clone();
let udp_tx = udp.clone();
let tx_target = client_udp_addr;
let is_multi = multi_conn;
let is_multi = use_unconnected;
let tx_handle = if server_should_tx {
Some(tokio::spawn(async move {
udp_tx_loop(&udp_tx, tx_size, tx_speed, state_tx, is_multi, tx_target).await
@@ -422,7 +820,7 @@ async fn run_udp_test_server(
state.running.store(false, Ordering::SeqCst);
if let Some(h) = tx_handle { let _ = h.await; }
if let Some(h) = rx_handle { let _ = h.await; }
Ok(())
Ok(state.summary())
}
async fn udp_tx_loop(
@@ -440,6 +838,10 @@ async fn udp_tx_loop(
let mut consecutive_errors: u32 = 0;
while state.running.load(Ordering::Relaxed) {
if !state.spend_budget(tx_size as u64) {
break;
}
packet[0..4].copy_from_slice(&seq.to_be_bytes());
let result = if multi_conn {
@@ -453,13 +855,20 @@ async fn udp_tx_loop(
state.tx_bytes.fetch_add(n as u64, Ordering::Relaxed);
consecutive_errors = 0;
}
Err(_) => {
Err(e) => {
consecutive_errors += 1;
if consecutive_errors > 1000 {
if consecutive_errors == 1 {
tracing::debug!("UDP TX send error: {} (target={})", e, target);
}
if consecutive_errors > 50000 {
tracing::warn!("UDP TX: too many consecutive send errors, stopping");
break;
}
tokio::time::sleep(Duration::from_micros(200)).await;
// Adaptive backoff: sleep longer as errors accumulate
let backoff = Duration::from_micros(
(200 + consecutive_errors.min(5000) as u64 * 10).min(10000)
);
tokio::time::sleep(backoff).await;
continue;
}
}
@@ -476,16 +885,23 @@ async fn udp_tx_loop(
match interval {
Some(iv) => {
next_send += iv;
let now = Instant::now();
if next_send > now {
tokio::time::sleep(next_send - now).await;
if let Some(delay) = bandwidth::advance_next_send(&mut next_send, iv, now) {
tokio::time::sleep(delay).await;
}
}
None => {
// Unlimited: yield every 64 packets to keep system responsive
if seq % 64 == 0 {
tokio::task::yield_now().await;
// "Unlimited" mode: still need minimal pacing to prevent
// macOS interface queue overflow (ENOBUFS).
// Yield every 16 packets; if errors seen, add real delay.
if seq % 16 == 0 {
if consecutive_errors > 0 {
// Back off enough for the NIC to drain
tokio::time::sleep(Duration::from_micros(50)).await;
consecutive_errors = 0; // reset after yielding
} else {
tokio::task::yield_now().await;
}
}
}
}
@@ -501,6 +917,9 @@ async fn udp_rx_loop(socket: &UdpSocket, state: Arc<BandwidthState>) {
// (multi-connection MikroTik sends from multiple ports)
match tokio::time::timeout(Duration::from_secs(5), socket.recv_from(&mut buf)).await {
Ok(Ok((n, _src))) if n >= 4 => {
if !state.spend_budget(n as u64) {
break;
}
state.rx_bytes.fetch_add(n as u64, Ordering::Relaxed);
state.rx_packets.fetch_add(1, Ordering::Relaxed);
@@ -541,14 +960,15 @@ async fn status_report_loop(cmd: &Command, state: &BandwidthState) {
seq += 1;
let tx = if cmd.server_tx() { state.tx_bytes.swap(0, Ordering::Relaxed) } else { 0 };
let rx = if cmd.server_rx() { state.rx_bytes.swap(0, Ordering::Relaxed) } else { 0 };
let lost = if cmd.server_rx() { state.rx_lost_packets.swap(0, Ordering::Relaxed) } else { 0 };
state.record_interval(tx, rx, lost);
if cmd.server_tx() {
let tx = state.tx_bytes.swap(0, Ordering::Relaxed);
bandwidth::print_status(seq, "TX", tx, Duration::from_secs(1), None);
}
if cmd.server_rx() {
let rx = state.rx_bytes.swap(0, Ordering::Relaxed);
let lost = state.rx_lost_packets.swap(0, Ordering::Relaxed);
let lost_opt = if cmd.is_udp() { Some(lost) } else { None };
bandwidth::print_status(seq, "RX", rx, Duration::from_secs(1), lost_opt);
}
@@ -588,9 +1008,10 @@ async fn udp_status_loop(
match tokio::time::timeout(wait_time, reader.read_exact(&mut status_buf)).await {
Ok(Ok(_)) => {
let client_status = StatusMessage::deserialize(&status_buf);
state.remote_cpu.store(client_status.cpu_load, Ordering::Relaxed);
tracing::debug!(
"RECV status: raw={:02x?} seq={} bytes_received={}",
&status_buf, client_status.seq, client_status.bytes_received,
"RECV status: raw={:02x?} seq={} bytes_received={} cpu={}%",
&status_buf, client_status.seq, client_status.bytes_received, client_status.cpu_load,
);
if client_status.bytes_received > 0 && cmd.server_tx() {
@@ -627,9 +1048,17 @@ async fn udp_status_loop(
let tx_bytes = state.tx_bytes.swap(0, Ordering::Relaxed);
let lost = state.rx_lost_packets.swap(0, Ordering::Relaxed);
let status = StatusMessage {
// Report bytes relevant to the active direction.
// When TX-only: report tx_bytes so client knows data is flowing.
// When RX or BOTH: report rx_bytes (how much we received from client).
let report_bytes = if cmd.server_tx() && !cmd.server_rx() {
tx_bytes
} else {
rx_bytes
};
let status = StatusMessage { cpu_load: crate::cpu::get(),
seq,
bytes_received: rx_bytes as u32,
bytes_received: report_bytes as u32,
};
let serialized = status.serialize();
tracing::debug!(
@@ -642,12 +1071,17 @@ async fn udp_status_loop(
}
let _ = writer.flush().await;
// Print local stats
// Print local stats and record totals
state.record_interval(tx_bytes, rx_bytes, lost);
if cmd.server_tx() {
bandwidth::print_status(seq, "TX", tx_bytes, Duration::from_secs(1), None);
let local_cpu = crate::cpu::get();
let remote_cpu = state.remote_cpu.load(Ordering::Relaxed);
bandwidth::print_status_with_cpu(seq, "TX", tx_bytes, Duration::from_secs(1), None, Some(local_cpu), Some(remote_cpu));
}
if cmd.server_rx() {
bandwidth::print_status(seq, "RX", rx_bytes, Duration::from_secs(1), Some(lost));
let local_cpu = crate::cpu::get();
let remote_cpu = state.remote_cpu.load(Ordering::Relaxed);
bandwidth::print_status_with_cpu(seq, "RX", rx_bytes, Duration::from_secs(1), Some(lost), Some(local_cpu), Some(remote_cpu));
}
}
}
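The adaptive backoff added to the UDP TX error path in this diff can be checked in isolation; this sketch simply restates the formula from the diff, `(200 + min(errors, 5000) * 10)` microseconds capped at 10 ms:

```rust
/// Adaptive backoff for consecutive UDP send errors, as in the diff:
/// starts near 200 us, grows 10 us per error, capped at 10 ms.
fn backoff_micros(consecutive_errors: u32) -> u64 {
    (200 + consecutive_errors.min(5000) as u64 * 10).min(10_000)
}

fn main() {
    assert_eq!(backoff_micros(0), 200);         // base delay
    assert_eq!(backoff_micros(1), 210);         // first error: barely above base
    assert_eq!(backoff_micros(980), 10_000);    // cap is reached at 980 errors
    assert_eq!(backoff_micros(50_000), 10_000); // stays capped thereafter
}
```

Note the cap means the `min(5000)` clamp on the error count never actually matters (980 errors already hit 10 ms); it only guards the arithmetic.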

src/server_pro/enforcer.rs (new file, 411 lines)

@@ -0,0 +1,411 @@
//! Mid-session quota enforcement.
//!
//! Runs alongside a bandwidth test, periodically checking if the user
//! or IP has exceeded their quota. Terminates the test if so.
use std::net::IpAddr;
use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::time::{Duration, Instant};
use btest_rs::bandwidth::BandwidthState;
use super::quota::{Direction, QuotaManager};
/// Enforces quotas during an active test session.
/// Call `run()` as a spawned task — it will set `state.running = false`
/// when a quota is exceeded or max_duration is reached.
pub struct QuotaEnforcer {
quota_mgr: QuotaManager,
username: String,
ip: IpAddr,
state: Arc<BandwidthState>,
check_interval: Duration,
max_duration: Duration,
}
#[derive(Debug, PartialEq)]
pub enum StopReason {
/// Test still running (not stopped)
Running,
/// Max duration reached
MaxDuration,
/// User daily quota exceeded
UserDailyQuota,
/// User weekly quota exceeded
UserWeeklyQuota,
/// User monthly quota exceeded
UserMonthlyQuota,
/// IP daily quota exceeded
IpDailyQuota,
/// IP weekly quota exceeded
IpWeeklyQuota,
/// IP monthly quota exceeded
IpMonthlyQuota,
/// Client disconnected normally
ClientDisconnected,
}
impl std::fmt::Display for StopReason {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Running => write!(f, "running"),
Self::MaxDuration => write!(f, "max_duration_reached"),
Self::UserDailyQuota => write!(f, "user_daily_quota_exceeded"),
Self::UserWeeklyQuota => write!(f, "user_weekly_quota_exceeded"),
Self::UserMonthlyQuota => write!(f, "user_monthly_quota_exceeded"),
Self::IpDailyQuota => write!(f, "ip_daily_quota_exceeded"),
Self::IpWeeklyQuota => write!(f, "ip_weekly_quota_exceeded"),
Self::IpMonthlyQuota => write!(f, "ip_monthly_quota_exceeded"),
Self::ClientDisconnected => write!(f, "client_disconnected"),
}
}
}
impl QuotaEnforcer {
pub fn new(
quota_mgr: QuotaManager,
username: String,
ip: IpAddr,
state: Arc<BandwidthState>,
check_interval_secs: u64,
max_duration_secs: u64,
) -> Self {
Self {
quota_mgr,
username,
ip,
state,
check_interval: Duration::from_secs(check_interval_secs.max(1)),
max_duration: if max_duration_secs > 0 {
Duration::from_secs(max_duration_secs)
} else {
Duration::from_secs(u64::MAX / 2) // effectively unlimited
},
}
}
/// Run the enforcer loop. Returns the reason the test was stopped.
/// This should be spawned as a tokio task.
pub async fn run(&self) -> StopReason {
let start = Instant::now();
let mut interval = tokio::time::interval(self.check_interval);
interval.tick().await; // consume first immediate tick
loop {
interval.tick().await;
// Check if test already ended normally
if !self.state.running.load(Ordering::Relaxed) {
return StopReason::ClientDisconnected;
}
// Check max duration
if start.elapsed() >= self.max_duration {
tracing::warn!(
"Max duration ({:?}) reached for user '{}' from {}",
self.max_duration, self.username, self.ip,
);
self.state.running.store(false, Ordering::SeqCst);
return StopReason::MaxDuration;
}
// Flush current session bytes to DB before checking
// (read without reset — totals accumulate, we just need current snapshot)
let session_tx = self.state.total_tx_bytes.load(Ordering::Relaxed);
let session_rx = self.state.total_rx_bytes.load(Ordering::Relaxed);
// Session bytes are passed to the checkers below; they reach the DB via
// the periodic flush_to_db() and are finalized at session end.
let ip_str = self.ip.to_string();
// Check user quotas
match self.check_user_with_session(session_tx, session_rx) {
StopReason::Running => {}
reason => {
tracing::warn!(
"Quota exceeded for user '{}' from {}: {} (session: tx={}, rx={})",
self.username, self.ip, reason, session_tx, session_rx,
);
self.state.running.store(false, Ordering::SeqCst);
return reason;
}
}
// Check IP quotas
match self.check_ip_with_session(&ip_str, session_tx, session_rx) {
StopReason::Running => {}
reason => {
tracing::warn!(
"IP quota exceeded for {} (user '{}'): {} (session: tx={}, rx={})",
self.ip, self.username, reason, session_tx, session_rx,
);
self.state.running.store(false, Ordering::SeqCst);
return reason;
}
}
}
}
fn check_user_with_session(&self, session_tx: u64, session_rx: u64) -> StopReason {
let _session_total = session_tx + session_rx; // underscore silences the unused-variable warning
// Check against the quota manager, which reads the DB.
// The DB holds usage from previous sessions plus whatever this session has flushed so far.
if let Err(e) = self.quota_mgr.check_user(&self.username) {
// Already exceeded from previous sessions
return match format!("{}", e).as_str() {
s if s.contains("daily") => StopReason::UserDailyQuota,
s if s.contains("weekly") => StopReason::UserWeeklyQuota,
s if s.contains("monthly") => StopReason::UserMonthlyQuota,
_ => StopReason::UserDailyQuota,
};
}
// Current-session bytes beyond the last flush are invisible to check_user;
// they are picked up by the next periodic flush_to_db(), so enforcement
// lags by at most one check interval.
StopReason::Running
}
fn check_ip_with_session(&self, _ip_str: &str, _session_tx: u64, _session_rx: u64) -> StopReason {
if let Err(e) = self.quota_mgr.check_ip(&self.ip, Direction::Both) {
return match format!("{}", e).as_str() {
s if s.contains("IP daily") => StopReason::IpDailyQuota,
s if s.contains("IP weekly") => StopReason::IpWeeklyQuota,
s if s.contains("IP monthly") => StopReason::IpMonthlyQuota,
s if s.contains("connections") => StopReason::IpDailyQuota, // no dedicated variant; mapped to daily
_ => StopReason::IpDailyQuota,
};
}
StopReason::Running
}
/// Flush session bytes to DB. Call periodically and at session end.
pub fn flush_to_db(&self) {
let tx = self.state.total_tx_bytes.load(Ordering::Relaxed);
let rx = self.state.total_rx_bytes.load(Ordering::Relaxed);
// From server perspective: tx = outbound (we sent), rx = inbound (we received)
self.quota_mgr.record_usage(
&self.username,
&self.ip.to_string(),
rx, // inbound = what we received from client
tx, // outbound = what we sent to client
);
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::user_db::UserDb;
use crate::quota::QuotaManager;
fn setup_test_db() -> (UserDb, QuotaManager) {
let db = UserDb::open(":memory:").unwrap();
db.ensure_tables().unwrap();
db.add_user("testuser", "testpass").unwrap();
let qm = QuotaManager::new(
db.clone(),
1000, // daily: 1000 bytes
5000, // weekly
10000, // monthly
500, // ip daily (combined)
2000, // ip weekly (combined)
8000, // ip monthly (combined)
500, // ip_daily_inbound
500, // ip_daily_outbound
2000, // ip_weekly_inbound
2000, // ip_weekly_outbound
8000, // ip_monthly_inbound
8000, // ip_monthly_outbound
2, // max conn per ip
60, // max duration
);
(db, qm)
}
#[tokio::test]
async fn test_enforcer_max_duration() {
let (_db, qm) = setup_test_db();
let state = BandwidthState::new();
let enforcer = QuotaEnforcer::new(
qm, "testuser".into(), "127.0.0.1".parse().unwrap(),
state.clone(), 1, 2, // check every 1s, max 2s
);
let reason = enforcer.run().await;
assert_eq!(reason, StopReason::MaxDuration);
assert!(!state.running.load(Ordering::Relaxed));
}
#[tokio::test]
async fn test_enforcer_client_disconnect() {
let (_db, qm) = setup_test_db();
let state = BandwidthState::new();
let state_clone = state.clone();
// Stop the test after 500ms
tokio::spawn(async move {
tokio::time::sleep(Duration::from_millis(500)).await;
state_clone.running.store(false, Ordering::SeqCst);
});
let enforcer = QuotaEnforcer::new(
qm, "testuser".into(), "127.0.0.1".parse().unwrap(),
state, 1, 0, // check every 1s, no max duration
);
let reason = enforcer.run().await;
assert_eq!(reason, StopReason::ClientDisconnected);
}
#[tokio::test]
async fn test_enforcer_user_daily_quota_exceeded() {
let (db, qm) = setup_test_db();
// Pre-fill usage to exceed daily quota (1000 bytes)
db.record_usage("testuser", 600, 500).unwrap(); // 1100 > 1000
let state = BandwidthState::new();
let enforcer = QuotaEnforcer::new(
qm, "testuser".into(), "127.0.0.1".parse().unwrap(),
state.clone(), 1, 0,
);
let reason = enforcer.run().await;
assert_eq!(reason, StopReason::UserDailyQuota);
assert!(!state.running.load(Ordering::Relaxed));
}
#[tokio::test]
async fn test_enforcer_ip_daily_quota_exceeded() {
let (db, qm) = setup_test_db();
// Pre-fill IP usage to exceed IP daily quota (500 bytes)
db.record_ip_usage("127.0.0.1", 300, 300).unwrap(); // 600 > 500
let state = BandwidthState::new();
let enforcer = QuotaEnforcer::new(
qm, "testuser".into(), "127.0.0.1".parse().unwrap(),
state.clone(), 1, 0,
);
let reason = enforcer.run().await;
assert_eq!(reason, StopReason::IpDailyQuota);
assert!(!state.running.load(Ordering::Relaxed));
}
#[tokio::test]
async fn test_enforcer_under_quota_runs_normally() {
let (db, qm) = setup_test_db();
// Usage well under quota
db.record_usage("testuser", 100, 100).unwrap(); // 200 < 1000
let state = BandwidthState::new();
let state_clone = state.clone();
// Stop after 2s
tokio::spawn(async move {
tokio::time::sleep(Duration::from_secs(2)).await;
state_clone.running.store(false, Ordering::SeqCst);
});
let enforcer = QuotaEnforcer::new(
qm, "testuser".into(), "127.0.0.1".parse().unwrap(),
state, 1, 0,
);
let reason = enforcer.run().await;
assert_eq!(reason, StopReason::ClientDisconnected);
}
#[tokio::test]
async fn test_enforcer_flush_records_usage() {
let (db, qm) = setup_test_db();
let state = BandwidthState::new();
// Simulate some transfer
state.total_tx_bytes.store(5000, Ordering::Relaxed);
state.total_rx_bytes.store(3000, Ordering::Relaxed);
let enforcer = QuotaEnforcer::new(
qm, "testuser".into(), "127.0.0.1".parse().unwrap(),
state, 10, 0,
);
enforcer.flush_to_db();
// flush_to_db: total_tx=5000→outbound, total_rx=3000→inbound
// quota_mgr.record_usage(inbound=3000, outbound=5000)
// db.record_usage(tx=outbound=5000, rx=inbound=3000)
let (tx, rx) = db.get_daily_usage("testuser").unwrap();
assert_eq!(tx, 5000); // outbound (what server sent)
assert_eq!(rx, 3000); // inbound (what server received)
let (ip_in, ip_out) = db.get_ip_daily_usage("127.0.0.1").unwrap();
assert!(ip_in + ip_out > 0, "IP usage should be recorded");
}
#[test]
fn test_remaining_budget_calculation() {
let (_db, qm) = setup_test_db();
let ip: IpAddr = "10.0.0.1".parse().unwrap();
// No usage yet: budget = min(daily=1000, weekly=5000, monthly=10000, ip_daily=500, ...)
// IP daily combined = 500 is the smallest
let budget = qm.remaining_budget("testuser", &ip);
assert_eq!(budget, 500, "budget should be min of all limits (ip_daily=500)");
// Use record_usage which properly records combined + directional
// inbound=200, outbound=200 → combined = 400
qm.record_usage("testuser", "10.0.0.1", 200, 200);
// IP daily combined: 500 - 400 = 100 remaining
// IP daily inbound: 500 - 200 = 300 remaining
// IP daily outbound: 500 - 200 = 300 remaining
// User daily: 1000 - 400 = 600 remaining
let budget = qm.remaining_budget("testuser", &ip);
assert_eq!(budget, 100, "budget should reflect IP combined remaining (100)");
}
#[test]
fn test_budget_zero_when_exhausted() {
let (db, qm) = setup_test_db();
let ip: IpAddr = "10.0.0.2".parse().unwrap();
// Exhaust user daily quota (1000 bytes)
db.record_usage("testuser", 600, 500).unwrap(); // 1100 > 1000
let budget = qm.remaining_budget("testuser", &ip);
assert_eq!(budget, 0, "budget should be 0 when user daily quota is exhausted");
}
#[test]
fn test_byte_budget_stops_transfer() {
let state = BandwidthState::new();
// Set a 1000-byte budget
state.set_budget(1000);
// Spend 500 bytes — should succeed
assert!(state.spend_budget(500));
// Spend another 400 — should succeed (100 remaining)
assert!(state.spend_budget(400));
// Spend 200 — should fail (only 100 remaining)
assert!(!state.spend_budget(200));
// running should be false
assert!(!state.running.load(Ordering::Relaxed));
}
#[test]
fn test_unlimited_budget_always_succeeds() {
let state = BandwidthState::new();
// Default budget is u64::MAX (unlimited)
// Should always succeed
for _ in 0..1000 {
assert!(state.spend_budget(1_000_000_000));
}
assert!(state.running.load(Ordering::Relaxed));
}
}
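`BandwidthState::spend_budget` itself lives outside this diff; the tests above pin down its contract. A minimal sketch consistent with those tests, an atomic countdown that flips `running` off on exhaustion (the field names, sentinel, and memory orderings are assumptions):

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

// Minimal stand-in for the budget portion of BandwidthState.
struct Budget {
    remaining: AtomicU64,
    running: AtomicBool,
}

impl Budget {
    fn new() -> Self {
        // u64::MAX means "unlimited", matching test_unlimited_budget_always_succeeds
        Self { remaining: AtomicU64::new(u64::MAX), running: AtomicBool::new(true) }
    }

    fn set_budget(&self, limit: u64) {
        self.remaining.store(limit, Ordering::SeqCst);
    }

    /// Atomically deduct `n` bytes; on exhaustion, stop the test and return false.
    fn spend_budget(&self, n: u64) -> bool {
        let mut cur = self.remaining.load(Ordering::Relaxed);
        loop {
            if cur != u64::MAX && cur < n {
                self.running.store(false, Ordering::SeqCst);
                return false;
            }
            let next = if cur == u64::MAX { cur } else { cur - n };
            match self.remaining.compare_exchange_weak(cur, next, Ordering::Relaxed, Ordering::Relaxed) {
                Ok(_) => return true,
                Err(actual) => cur = actual, // lost a race; retry with fresh value
            }
        }
    }
}

fn main() {
    let b = Budget::new();
    b.set_budget(1000);
    assert!(b.spend_budget(500));
    assert!(b.spend_budget(400));
    assert!(!b.spend_budget(200)); // only 100 left
    assert!(!b.running.load(Ordering::Relaxed));
}
```

The CAS loop matters because TX, RX, and status tasks all spend from the same budget concurrently; a plain load-then-store would let two tasks overspend the final bytes.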


@@ -0,0 +1,74 @@
//! LDAP/Active Directory authentication for btest-server-pro.
//!
//! Authenticates users against an LDAP directory using simple bind.
use ldap3::{LdapConnAsync, Scope, SearchEntry};
pub struct LdapConfig {
pub url: String,
pub base_dn: String,
pub bind_dn: Option<String>,
pub bind_pass: Option<String>,
}
pub struct LdapAuth {
config: LdapConfig,
}
impl LdapAuth {
pub fn new(config: LdapConfig) -> Self {
Self { config }
}
/// Authenticate a user by attempting an LDAP bind.
/// Returns Ok(true) if authentication succeeds.
pub async fn authenticate(&self, username: &str, password: &str) -> anyhow::Result<bool> {
let (conn, mut ldap) = LdapConnAsync::new(&self.config.url).await?;
ldap3::drive!(conn);
// If service account configured, bind first to search for user DN
let user_dn = if let (Some(ref bind_dn), Some(ref bind_pass)) =
(&self.config.bind_dn, &self.config.bind_pass)
{
let result = ldap.simple_bind(bind_dn, bind_pass).await?;
if result.rc != 0 {
tracing::warn!("LDAP service bind failed: rc={}", result.rc);
return Ok(false);
}
// Search for the user
let filter = format!(
"(&(objectClass=person)(|(uid={})(sAMAccountName={})(cn={})))",
username, username, username
);
let (results, _) = ldap
.search(&self.config.base_dn, Scope::Subtree, &filter, vec!["dn"])
.await?
.success()?;
if results.is_empty() {
tracing::debug!("LDAP user not found: {}", username);
return Ok(false);
}
let entry = SearchEntry::construct(results.into_iter().next().unwrap());
entry.dn
} else {
// No service account — construct DN directly
format!("uid={},{}", username, self.config.base_dn)
};
// Attempt user bind
let result = ldap.simple_bind(&user_dn, password).await?;
let success = result.rc == 0;
if success {
tracing::info!("LDAP auth successful for {} (dn={})", username, user_dn);
} else {
tracing::warn!("LDAP auth failed for {} (dn={}): rc={}", username, user_dn, result.rc);
}
let _ = ldap.unbind().await;
Ok(success)
}
}
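One caution on the search filter above: `username` is interpolated into the filter string unescaped, so input like `*)(uid=*` could change the filter's meaning. A hedged sketch of RFC 4515 value escaping (this helper is not in the diff):

```rust
/// Escape special characters in an LDAP filter *value* per RFC 4515,
/// so user-supplied input cannot alter the filter structure.
fn ldap_filter_escape(value: &str) -> String {
    let mut out = String::with_capacity(value.len());
    for b in value.bytes() {
        match b {
            b'\\' => out.push_str("\\5c"),
            b'*' => out.push_str("\\2a"),
            b'(' => out.push_str("\\28"),
            b')' => out.push_str("\\29"),
            0x00 => out.push_str("\\00"),
            // Escape non-ASCII bytes too, so multi-byte UTF-8 stays unambiguous.
            b if b >= 0x80 => out.push_str(&format!("\\{:02x}", b)),
            _ => out.push(b as char),
        }
    }
    out
}

fn main() {
    assert_eq!(ldap_filter_escape("alice"), "alice");
    assert_eq!(ldap_filter_escape("*)(uid=*"), "\\2a\\29\\28uid=\\2a");
}
```

Calling `ldap_filter_escape(username)` before building the `(&(objectClass=person)...)` filter would close this hole; the directly constructed `uid={},{}` DN in the no-service-account path deserves similar treatment (DN escaping follows RFC 4514, a different rule set).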

src/server_pro/main.rs (new file, 343 lines)

@@ -0,0 +1,343 @@
//! btest-server-pro: MikroTik Bandwidth Test server with multi-user, quotas, and LDAP.
//!
//! This is a superset of the standard `btest` server with additional features:
//! - SQLite user database (--users-db)
//! - Per-user and per-IP bandwidth quotas (daily/weekly)
//! - LDAP/Active Directory authentication (--ldap-url)
//! - Rate limiting for public server deployment
//!
//! Build with: cargo build --release --features pro --bin btest-server-pro
mod user_db;
mod quota;
mod enforcer;
mod server_loop;
mod web;
mod ldap_auth;
use clap::Parser;
use tracing_subscriber::EnvFilter;
#[derive(Parser, Debug)]
#[command(
name = "btest-server-pro",
about = "btest-rs Pro Server: multi-user, quotas, LDAP",
version,
)]
struct Cli {
/// Listen port
#[arg(short = 'P', long = "port", default_value_t = 2000)]
port: u16,
/// IPv4 listen address
#[arg(long = "listen", default_value = "0.0.0.0")]
listen_addr: String,
/// IPv6 listen address (optional)
#[arg(long = "listen6")]
listen6_addr: Option<String>,
/// SQLite user database path
#[arg(long = "users-db", default_value = "btest-users.db")]
users_db: String,
/// LDAP server URL (e.g., ldap://dc.example.com)
#[arg(long = "ldap-url")]
ldap_url: Option<String>,
/// LDAP base DN for user search
#[arg(long = "ldap-base-dn")]
ldap_base_dn: Option<String>,
/// LDAP bind DN (for service account)
#[arg(long = "ldap-bind-dn")]
ldap_bind_dn: Option<String>,
/// LDAP bind password
#[arg(long = "ldap-bind-pass")]
ldap_bind_pass: Option<String>,
/// Default daily quota per user in bytes (0 = unlimited)
#[arg(long = "daily-quota", default_value_t = 0)]
daily_quota: u64,
/// Default weekly quota per user in bytes (0 = unlimited)
#[arg(long = "weekly-quota", default_value_t = 0)]
weekly_quota: u64,
/// Default monthly quota per user in bytes (0 = unlimited)
#[arg(long = "monthly-quota", default_value_t = 0)]
monthly_quota: u64,
/// Daily bandwidth limit per IP in bytes (0 = unlimited)
#[arg(long = "ip-daily", default_value_t = 0)]
ip_daily: u64,
/// Weekly bandwidth limit per IP in bytes (0 = unlimited)
#[arg(long = "ip-weekly", default_value_t = 0)]
ip_weekly: u64,
/// Monthly bandwidth limit per IP in bytes (0 = unlimited)
#[arg(long = "ip-monthly", default_value_t = 0)]
ip_monthly: u64,
/// Maximum concurrent connections per IP (0 = unlimited)
#[arg(long = "max-conn-per-ip", default_value_t = 5)]
max_conn_per_ip: u32,
/// Maximum test duration in seconds (0 = unlimited)
#[arg(long = "max-duration", default_value_t = 300)]
max_duration: u64,
/// Daily inbound (client→server) limit per IP in bytes (0 = use --ip-daily)
#[arg(long = "ip-daily-in", default_value_t = 0)]
ip_daily_in: u64,
/// Daily outbound (server→client) limit per IP in bytes (0 = use --ip-daily)
#[arg(long = "ip-daily-out", default_value_t = 0)]
ip_daily_out: u64,
/// Weekly inbound limit per IP in bytes (0 = use --ip-weekly)
#[arg(long = "ip-weekly-in", default_value_t = 0)]
ip_weekly_in: u64,
/// Weekly outbound limit per IP in bytes (0 = use --ip-weekly)
#[arg(long = "ip-weekly-out", default_value_t = 0)]
ip_weekly_out: u64,
/// Monthly inbound limit per IP in bytes (0 = use --ip-monthly)
#[arg(long = "ip-monthly-in", default_value_t = 0)]
ip_monthly_in: u64,
/// Monthly outbound limit per IP in bytes (0 = use --ip-monthly)
#[arg(long = "ip-monthly-out", default_value_t = 0)]
ip_monthly_out: u64,
/// How often to check quotas during a test in seconds
#[arg(long = "quota-check-interval", default_value_t = 10)]
quota_check_interval: u64,
/// Web dashboard port (0 = disabled)
#[arg(long = "web-port", default_value_t = 8080)]
web_port: u16,
/// Shared password for public mode (all users use this password)
#[arg(long = "shared-password")]
shared_password: Option<String>,
/// Use EC-SRP5 authentication
#[arg(long = "ecsrp5")]
ecsrp5: bool,
/// Syslog server address
#[arg(long = "syslog")]
syslog: Option<String>,
/// CSV output file
#[arg(long = "csv")]
csv: Option<String>,
/// Verbose logging
#[arg(short = 'v', long = "verbose", action = clap::ArgAction::Count)]
verbose: u8,
/// User management subcommand
#[command(subcommand)]
command: Option<UserCommand>,
}
#[derive(clap::Subcommand, Debug)]
enum UserCommand {
/// Add a user
#[command(name = "useradd")]
UserAdd {
/// Username
username: String,
/// Password
password: String,
},
/// Delete a user
#[command(name = "userdel")]
UserDel {
/// Username
username: String,
},
/// List all users
#[command(name = "userlist")]
UserList,
/// Enable/disable a user
#[command(name = "userset")]
UserSet {
/// Username
username: String,
/// Enable (true/false)
#[arg(long)]
enabled: Option<bool>,
/// Daily quota in bytes
#[arg(long)]
daily: Option<i64>,
/// Weekly quota in bytes
#[arg(long)]
weekly: Option<i64>,
},
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let cli = Cli::parse();
let filter = match cli.verbose {
0 => "info",
1 => "debug",
_ => "trace",
};
tracing_subscriber::fmt()
.with_env_filter(
EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new(filter)),
)
.with_target(false)
.init();
// Initialize subsystems
btest_rs::cpu::start_sampler();
if let Some(ref syslog_addr) = cli.syslog {
if let Err(e) = btest_rs::syslog_logger::init(syslog_addr) {
eprintln!("Warning: syslog init failed: {}", e);
}
}
if let Some(ref csv_path) = cli.csv {
if let Err(e) = btest_rs::csv_output::init(csv_path) {
eprintln!("Warning: CSV init failed: {}", e);
}
}
// Initialize user database
let db = user_db::UserDb::open(&cli.users_db)?;
db.ensure_tables()?;
// Handle user management subcommands (exit after)
if let Some(cmd) = &cli.command {
match cmd {
UserCommand::UserAdd { username, password } => {
db.add_user(username, password)?;
println!("User '{}' added.", username);
return Ok(());
}
UserCommand::UserDel { username } => {
if db.delete_user(username)? {
println!("User '{}' deleted.", username);
} else {
println!("User '{}' not found.", username);
}
return Ok(());
}
UserCommand::UserList => {
let users = db.list_users()?;
if users.is_empty() {
println!("No users.");
} else {
println!("{:<20} {:<10} {:<15} {:<15}", "USERNAME", "ENABLED", "DAILY_QUOTA", "WEEKLY_QUOTA");
println!("{}", "-".repeat(60));
for u in &users {
println!("{:<20} {:<10} {:<15} {:<15}",
u.username,
if u.enabled { "yes" } else { "no" },
if u.daily_quota == 0 { "default".to_string() } else { format!("{}B", u.daily_quota) },
if u.weekly_quota == 0 { "default".to_string() } else { format!("{}B", u.weekly_quota) },
);
}
}
return Ok(());
}
UserCommand::UserSet { username, enabled, daily, weekly } => {
if let Some(e) = enabled {
db.set_user_enabled(username, *e)?;
println!("User '{}' enabled={}", username, e);
}
if daily.is_some() || weekly.is_some() {
let d = daily.unwrap_or(0);
let w = weekly.unwrap_or(0);
db.set_user_quota(username, d, w, 0)?;
println!("User '{}' quota: daily={}, weekly={}", username, d, w);
}
return Ok(());
}
}
}
tracing::info!("User database: {} ({} users)", cli.users_db, db.user_count()?);
// Initialize LDAP if configured
if let Some(ref url) = cli.ldap_url {
tracing::info!("LDAP configured: {}", url);
}
// Initialize quota manager
// Directional flags override combined: --ip-daily-in > --ip-daily > unlimited
let or_fallback = |specific: u64, combined: u64| if specific > 0 { specific } else { combined };
let quota_mgr = quota::QuotaManager::new(
db.clone(),
cli.daily_quota,
cli.weekly_quota,
cli.monthly_quota,
cli.ip_daily,
cli.ip_weekly,
cli.ip_monthly,
or_fallback(cli.ip_daily_in, cli.ip_daily),
or_fallback(cli.ip_daily_out, cli.ip_daily),
or_fallback(cli.ip_weekly_in, cli.ip_weekly),
or_fallback(cli.ip_weekly_out, cli.ip_weekly),
or_fallback(cli.ip_monthly_in, cli.ip_monthly),
or_fallback(cli.ip_monthly_out, cli.ip_monthly),
cli.max_conn_per_ip,
cli.max_duration,
);
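The precedence rule noted above (a directional flag overrides the combined flag, and 0 means unset/unlimited) can be sketched as a standalone helper; the demo values below are illustrative, not actual defaults:

```rust
// Resolve a quota flag: a non-zero specific (directional) value wins,
// otherwise fall back to the combined value. A result of 0 still means
// "unlimited" downstream.
fn or_fallback(specific: u64, combined: u64) -> u64 {
    if specific > 0 { specific } else { combined }
}

fn main() {
    // --ip-daily-in 100 overrides --ip-daily 500
    assert_eq!(or_fallback(100, 500), 100);
    // --ip-daily-in unset: inherit --ip-daily
    assert_eq!(or_fallback(0, 500), 500);
    // both unset: unlimited
    assert_eq!(or_fallback(0, 0), 0);
}
```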
let fmt_q = |v: u64| if v == 0 { "unlimited".to_string() } else { format!("{}B", v) };
tracing::info!(
"User quotas: daily={}, weekly={}, monthly={}",
fmt_q(cli.daily_quota), fmt_q(cli.weekly_quota), fmt_q(cli.monthly_quota),
);
tracing::info!(
"IP quotas: daily={}, weekly={}, monthly={}",
fmt_q(cli.ip_daily), fmt_q(cli.ip_weekly), fmt_q(cli.ip_monthly),
);
tracing::info!(
"Limits: max_conn_per_ip={}, max_duration={}s",
cli.max_conn_per_ip, cli.max_duration,
);
// Start web dashboard if port > 0
if cli.web_port > 0 {
let web_db = db.clone();
let web_port = cli.web_port;
tokio::spawn(async move {
tracing::info!("Web dashboard starting on http://0.0.0.0:{}", web_port);
let app = web::create_router(web_db);
let listener = tokio::net::TcpListener::bind(format!("0.0.0.0:{}", web_port))
.await
.expect("Failed to bind web dashboard port");
if let Err(e) = axum::serve(listener, app).await {
tracing::error!("Web dashboard error: {}", e);
}
});
}
tracing::info!("btest-server-pro starting on port {}", cli.port);
let v4 = if cli.listen_addr.eq_ignore_ascii_case("none") { None } else { Some(cli.listen_addr) };
let v6 = cli.listen6_addr;
server_loop::run_pro_server(
cli.port,
cli.ecsrp5,
v4, v6,
db,
quota_mgr,
cli.quota_check_interval,
).await?;
Ok(())
}

src/server_pro/quota.rs Normal file

@@ -0,0 +1,470 @@
//! Bandwidth quota management for btest-server-pro.
//!
//! Enforces per-user and per-IP bandwidth limits (daily/weekly/monthly),
//! with separate tracking for inbound (client-to-server) and outbound
//! (server-to-client) directions.
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::{Arc, Mutex};
use super::user_db::UserDb;
/// Traffic direction for bandwidth tests.
///
/// From the **server's** perspective:
/// - `Inbound` = client sends data to us (client TX, server RX)
/// - `Outbound` = we send data to the client (server TX, client RX)
/// - `Both` = bidirectional test
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Direction {
Inbound,
Outbound,
Both,
}
#[derive(Clone)]
pub struct QuotaManager {
db: UserDb,
/// Per-user defaults (0 = unlimited)
default_daily: u64,
default_weekly: u64,
default_monthly: u64,
/// Per-IP combined (inbound + outbound) limits (0 = unlimited) — for abuse prevention
ip_daily: u64,
ip_weekly: u64,
ip_monthly: u64,
/// Per-IP directional limits (0 = unlimited)
ip_daily_inbound: u64,
ip_daily_outbound: u64,
ip_weekly_inbound: u64,
ip_weekly_outbound: u64,
ip_monthly_inbound: u64,
ip_monthly_outbound: u64,
/// Max simultaneous connections from one IP
max_conn_per_ip: u32,
/// Max test duration in seconds
max_duration: u64,
active_connections: Arc<Mutex<HashMap<IpAddr, u32>>>,
}
#[derive(Debug)]
pub enum QuotaError {
DailyExceeded { used: u64, limit: u64 },
WeeklyExceeded { used: u64, limit: u64 },
MonthlyExceeded { used: u64, limit: u64 },
/// Combined (inbound + outbound) IP daily limit exceeded.
IpDailyExceeded { used: u64, limit: u64 },
/// Combined (inbound + outbound) IP weekly limit exceeded.
IpWeeklyExceeded { used: u64, limit: u64 },
/// Combined (inbound + outbound) IP monthly limit exceeded.
IpMonthlyExceeded { used: u64, limit: u64 },
/// Per-direction IP daily limits.
IpInboundDailyExceeded { used: u64, limit: u64 },
IpOutboundDailyExceeded { used: u64, limit: u64 },
/// Per-direction IP weekly limits.
IpInboundWeeklyExceeded { used: u64, limit: u64 },
IpOutboundWeeklyExceeded { used: u64, limit: u64 },
/// Per-direction IP monthly limits.
IpInboundMonthlyExceeded { used: u64, limit: u64 },
IpOutboundMonthlyExceeded { used: u64, limit: u64 },
TooManyConnections { current: u32, limit: u32 },
UserDisabled,
UserNotFound,
}
impl std::fmt::Display for QuotaError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::DailyExceeded { used, limit } =>
write!(f, "User daily quota exceeded: {}/{} bytes", used, limit),
Self::WeeklyExceeded { used, limit } =>
write!(f, "User weekly quota exceeded: {}/{} bytes", used, limit),
Self::MonthlyExceeded { used, limit } =>
write!(f, "User monthly quota exceeded: {}/{} bytes", used, limit),
Self::IpDailyExceeded { used, limit } =>
write!(f, "IP daily quota exceeded: {}/{} bytes", used, limit),
Self::IpWeeklyExceeded { used, limit } =>
write!(f, "IP weekly quota exceeded: {}/{} bytes", used, limit),
Self::IpMonthlyExceeded { used, limit } =>
write!(f, "IP monthly quota exceeded: {}/{} bytes", used, limit),
Self::IpInboundDailyExceeded { used, limit } =>
write!(f, "IP inbound daily quota exceeded: {}/{} bytes", used, limit),
Self::IpOutboundDailyExceeded { used, limit } =>
write!(f, "IP outbound daily quota exceeded: {}/{} bytes", used, limit),
Self::IpInboundWeeklyExceeded { used, limit } =>
write!(f, "IP inbound weekly quota exceeded: {}/{} bytes", used, limit),
Self::IpOutboundWeeklyExceeded { used, limit } =>
write!(f, "IP outbound weekly quota exceeded: {}/{} bytes", used, limit),
Self::IpInboundMonthlyExceeded { used, limit } =>
write!(f, "IP inbound monthly quota exceeded: {}/{} bytes", used, limit),
Self::IpOutboundMonthlyExceeded { used, limit } =>
write!(f, "IP outbound monthly quota exceeded: {}/{} bytes", used, limit),
Self::TooManyConnections { current, limit } =>
write!(f, "Too many connections from this IP: {}/{}", current, limit),
Self::UserDisabled => write!(f, "User account is disabled"),
Self::UserNotFound => write!(f, "User not found"),
}
}
}
impl QuotaManager {
#[allow(clippy::too_many_arguments)]
pub fn new(
db: UserDb,
default_daily: u64,
default_weekly: u64,
default_monthly: u64,
ip_daily: u64,
ip_weekly: u64,
ip_monthly: u64,
ip_daily_inbound: u64,
ip_daily_outbound: u64,
ip_weekly_inbound: u64,
ip_weekly_outbound: u64,
ip_monthly_inbound: u64,
ip_monthly_outbound: u64,
max_conn_per_ip: u32,
max_duration: u64,
) -> Self {
Self {
db,
default_daily,
default_weekly,
default_monthly,
ip_daily,
ip_weekly,
ip_monthly,
ip_daily_inbound,
ip_daily_outbound,
ip_weekly_inbound,
ip_weekly_outbound,
ip_monthly_inbound,
ip_monthly_outbound,
max_conn_per_ip,
max_duration,
active_connections: Arc::new(Mutex::new(HashMap::new())),
}
}
/// Check if a user is allowed to start a test.
pub fn check_user(&self, username: &str) -> Result<(), QuotaError> {
let user = self.db.get_user(username)
.map_err(|_| QuotaError::UserNotFound)?
.ok_or(QuotaError::UserNotFound)?;
if !user.enabled {
return Err(QuotaError::UserDisabled);
}
// Daily
let daily_limit = if user.daily_quota > 0 { user.daily_quota as u64 } else { self.default_daily };
if daily_limit > 0 {
let (tx, rx) = self.db.get_daily_usage(username).unwrap_or((0, 0));
let used = tx + rx;
if used >= daily_limit {
return Err(QuotaError::DailyExceeded { used, limit: daily_limit });
}
}
// Weekly
let weekly_limit = if user.weekly_quota > 0 { user.weekly_quota as u64 } else { self.default_weekly };
if weekly_limit > 0 {
let (tx, rx) = self.db.get_weekly_usage(username).unwrap_or((0, 0));
let used = tx + rx;
if used >= weekly_limit {
return Err(QuotaError::WeeklyExceeded { used, limit: weekly_limit });
}
}
// Monthly
if self.default_monthly > 0 {
let (tx, rx) = self.db.get_monthly_usage(username).unwrap_or((0, 0));
let used = tx + rx;
if used >= self.default_monthly {
return Err(QuotaError::MonthlyExceeded { used, limit: self.default_monthly });
}
}
Ok(())
}
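`check_user` resolves each effective limit the same way for every period: a positive per-user quota overrides the server default, and 0 at either level means unlimited. A minimal sketch of that resolution (function names here are illustrative):

```rust
// Per-user override (stored as i64, 0 = "use default") falls back to
// the server-wide default; 0 everywhere means unlimited.
fn effective_limit(user_quota: i64, default: u64) -> u64 {
    if user_quota > 0 { user_quota as u64 } else { default }
}

// A limit of 0 never trips; otherwise usage at or past the limit does.
fn exceeded(used: u64, limit: u64) -> bool {
    limit > 0 && used >= limit
}

fn main() {
    assert_eq!(effective_limit(0, 10_000), 10_000);    // no override
    assert_eq!(effective_limit(5_000, 10_000), 5_000); // override wins
    assert!(!exceeded(999, 0));                        // 0 = unlimited
    assert!(exceeded(10_000, 10_000));                 // >= trips the check
}
```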
/// Check if an IP is allowed to connect, considering both combined and
/// directional bandwidth quotas.
///
/// The `direction` parameter indicates which direction the test will use.
/// For `Direction::Both`, both inbound and outbound directional limits are
/// checked. Combined (total) limits are always checked regardless of
/// direction.
pub fn check_ip(&self, ip: &IpAddr, direction: Direction) -> Result<(), QuotaError> {
// Connection limit
if self.max_conn_per_ip > 0 {
let conns = self.active_connections.lock().unwrap();
let current = conns.get(ip).copied().unwrap_or(0);
if current >= self.max_conn_per_ip {
return Err(QuotaError::TooManyConnections {
current,
limit: self.max_conn_per_ip,
});
}
}
let ip_str = ip.to_string();
// --- Combined (inbound + outbound) limits ---
self.check_ip_combined(&ip_str)?;
// --- Directional limits ---
let check_inbound = matches!(direction, Direction::Inbound | Direction::Both);
let check_outbound = matches!(direction, Direction::Outbound | Direction::Both);
if check_inbound {
self.check_ip_inbound(&ip_str)?;
}
if check_outbound {
self.check_ip_outbound(&ip_str)?;
}
Ok(())
}
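The direction-to-checks mapping used by `check_ip` can be isolated as a small table: `Both` triggers both directional checks, each single direction triggers only its own. A sketch:

```rust
#[derive(Clone, Copy)]
enum Direction { Inbound, Outbound, Both }

// Mirrors the matches! logic in check_ip: returns (check_inbound, check_outbound).
fn checks_for(d: Direction) -> (bool, bool) {
    let inbound = matches!(d, Direction::Inbound | Direction::Both);
    let outbound = matches!(d, Direction::Outbound | Direction::Both);
    (inbound, outbound)
}

fn main() {
    assert_eq!(checks_for(Direction::Inbound), (true, false));
    assert_eq!(checks_for(Direction::Outbound), (false, true));
    assert_eq!(checks_for(Direction::Both), (true, true));
}
```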
/// Check combined (total inbound + outbound) IP limits.
fn check_ip_combined(&self, ip_str: &str) -> Result<(), QuotaError> {
// IP daily (combined)
if self.ip_daily > 0 {
let (tx, rx) = self.db.get_ip_daily_usage(ip_str).unwrap_or((0, 0));
let used = tx + rx;
if used >= self.ip_daily {
return Err(QuotaError::IpDailyExceeded { used, limit: self.ip_daily });
}
}
// IP weekly (combined)
if self.ip_weekly > 0 {
let (tx, rx) = self.db.get_ip_weekly_usage(ip_str).unwrap_or((0, 0));
let used = tx + rx;
if used >= self.ip_weekly {
return Err(QuotaError::IpWeeklyExceeded { used, limit: self.ip_weekly });
}
}
// IP monthly (combined)
if self.ip_monthly > 0 {
let (tx, rx) = self.db.get_ip_monthly_usage(ip_str).unwrap_or((0, 0));
let used = tx + rx;
if used >= self.ip_monthly {
return Err(QuotaError::IpMonthlyExceeded { used, limit: self.ip_monthly });
}
}
Ok(())
}
/// Check inbound-only (client sends to us) IP limits.
fn check_ip_inbound(&self, ip_str: &str) -> Result<(), QuotaError> {
// Daily inbound
if self.ip_daily_inbound > 0 {
let used = self.db.get_ip_daily_inbound(ip_str).unwrap_or(0);
if used >= self.ip_daily_inbound {
return Err(QuotaError::IpInboundDailyExceeded {
used,
limit: self.ip_daily_inbound,
});
}
}
// Weekly inbound
if self.ip_weekly_inbound > 0 {
let used = self.db.get_ip_weekly_inbound(ip_str).unwrap_or(0);
if used >= self.ip_weekly_inbound {
return Err(QuotaError::IpInboundWeeklyExceeded {
used,
limit: self.ip_weekly_inbound,
});
}
}
// Monthly inbound
if self.ip_monthly_inbound > 0 {
let used = self.db.get_ip_monthly_inbound(ip_str).unwrap_or(0);
if used >= self.ip_monthly_inbound {
return Err(QuotaError::IpInboundMonthlyExceeded {
used,
limit: self.ip_monthly_inbound,
});
}
}
Ok(())
}
/// Check outbound-only (we send to client) IP limits.
fn check_ip_outbound(&self, ip_str: &str) -> Result<(), QuotaError> {
// Daily outbound
if self.ip_daily_outbound > 0 {
let used = self.db.get_ip_daily_outbound(ip_str).unwrap_or(0);
if used >= self.ip_daily_outbound {
return Err(QuotaError::IpOutboundDailyExceeded {
used,
limit: self.ip_daily_outbound,
});
}
}
// Weekly outbound
if self.ip_weekly_outbound > 0 {
let used = self.db.get_ip_weekly_outbound(ip_str).unwrap_or(0);
if used >= self.ip_weekly_outbound {
return Err(QuotaError::IpOutboundWeeklyExceeded {
used,
limit: self.ip_weekly_outbound,
});
}
}
// Monthly outbound
if self.ip_monthly_outbound > 0 {
let used = self.db.get_ip_monthly_outbound(ip_str).unwrap_or(0);
if used >= self.ip_monthly_outbound {
return Err(QuotaError::IpOutboundMonthlyExceeded {
used,
limit: self.ip_monthly_outbound,
});
}
}
Ok(())
}
pub fn connect(&self, ip: &IpAddr) {
let mut conns = self.active_connections.lock().unwrap();
*conns.entry(*ip).or_insert(0) += 1;
}
pub fn disconnect(&self, ip: &IpAddr) {
let mut conns = self.active_connections.lock().unwrap();
if let Some(count) = conns.get_mut(ip) {
*count = count.saturating_sub(1);
if *count == 0 {
conns.remove(ip);
}
}
}
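The `connect`/`disconnect` pair above implements per-IP connection counting with entry removal at zero, so a stray extra `disconnect` is harmless. A self-contained sketch of the same bookkeeping:

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};
use std::sync::{Arc, Mutex};

// Increment the active-connection count for an IP, creating the entry on demand.
fn connect(map: &Arc<Mutex<HashMap<IpAddr, u32>>>, ip: IpAddr) {
    *map.lock().unwrap().entry(ip).or_insert(0) += 1;
}

// Decrement with saturation and drop the entry once the count hits zero,
// so the map only holds IPs with live connections.
fn disconnect(map: &Arc<Mutex<HashMap<IpAddr, u32>>>, ip: IpAddr) {
    let mut m = map.lock().unwrap();
    if let Some(c) = m.get_mut(&ip) {
        *c = c.saturating_sub(1);
        if *c == 0 { m.remove(&ip); }
    }
}

fn main() {
    let map = Arc::new(Mutex::new(HashMap::new()));
    let ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 9));
    connect(&map, ip);
    connect(&map, ip);
    assert_eq!(map.lock().unwrap().get(&ip).copied(), Some(2));
    disconnect(&map, ip);
    disconnect(&map, ip);
    assert!(map.lock().unwrap().get(&ip).is_none()); // entry removed at zero
    disconnect(&map, ip); // extra disconnect is a no-op
}
```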
/// Record usage after a test completes (both user and IP), with separate
/// inbound and outbound byte counts.
///
/// - `inbound_bytes`: bytes the client sent to us (server RX).
/// - `outbound_bytes`: bytes we sent to the client (server TX).
///
/// Both the combined user/IP usage and directional IP usage are recorded.
pub fn record_usage(
&self,
username: &str,
ip: &str,
inbound_bytes: u64,
outbound_bytes: u64,
) {
// Record combined user usage (tx/rx from the server's perspective:
// tx = outbound, rx = inbound).
if let Err(e) = self.db.record_usage(username, outbound_bytes, inbound_bytes) {
tracing::error!("Failed to record user usage for {}: {}", username, e);
}
// Record IP usage — record_ip_usage already writes both the
// inbound_bytes and outbound_bytes columns in one operation.
// Do NOT also call record_ip_inbound_usage/record_ip_outbound_usage
// as they update the same columns and would double-count.
if let Err(e) = self.db.record_ip_usage(ip, outbound_bytes, inbound_bytes) {
tracing::error!("Failed to record IP usage for {}: {}", ip, e);
}
}
/// Calculate the remaining byte budget for a user+IP combination.
/// Returns the minimum remaining quota across all applicable limits.
/// Used to set `BandwidthState::byte_budget` before a test starts,
/// preventing overshoot beyond quota boundaries.
pub fn remaining_budget(&self, username: &str, ip: &IpAddr) -> u64 {
let mut budget = u64::MAX;
let ip_str = ip.to_string();
// Helper: min that ignores 0 (unlimited)
let cap = |budget: &mut u64, limit: u64, used: u64| {
if limit > 0 {
let remaining = limit.saturating_sub(used);
*budget = (*budget).min(remaining);
}
};
// User quotas (combined tx+rx)
if let Ok(Some(user)) = self.db.get_user(username) {
let daily_limit = if user.daily_quota > 0 { user.daily_quota as u64 } else { self.default_daily };
if daily_limit > 0 {
let (tx, rx) = self.db.get_daily_usage(username).unwrap_or((0, 0));
cap(&mut budget, daily_limit, tx + rx);
}
let weekly_limit = if user.weekly_quota > 0 { user.weekly_quota as u64 } else { self.default_weekly };
if weekly_limit > 0 {
let (tx, rx) = self.db.get_weekly_usage(username).unwrap_or((0, 0));
cap(&mut budget, weekly_limit, tx + rx);
}
if self.default_monthly > 0 {
let (tx, rx) = self.db.get_monthly_usage(username).unwrap_or((0, 0));
cap(&mut budget, self.default_monthly, tx + rx);
}
}
// IP combined quotas
if self.ip_daily > 0 {
let (tx, rx) = self.db.get_ip_daily_usage(&ip_str).unwrap_or((0, 0));
cap(&mut budget, self.ip_daily, tx + rx);
}
if self.ip_weekly > 0 {
let (tx, rx) = self.db.get_ip_weekly_usage(&ip_str).unwrap_or((0, 0));
cap(&mut budget, self.ip_weekly, tx + rx);
}
if self.ip_monthly > 0 {
let (tx, rx) = self.db.get_ip_monthly_usage(&ip_str).unwrap_or((0, 0));
cap(&mut budget, self.ip_monthly, tx + rx);
}
// IP directional quotas: each direction's remaining headroom also caps the budget
if self.ip_daily_inbound > 0 {
let used = self.db.get_ip_daily_inbound(&ip_str).unwrap_or(0);
cap(&mut budget, self.ip_daily_inbound, used);
}
if self.ip_daily_outbound > 0 {
let used = self.db.get_ip_daily_outbound(&ip_str).unwrap_or(0);
cap(&mut budget, self.ip_daily_outbound, used);
}
if self.ip_weekly_inbound > 0 {
let used = self.db.get_ip_weekly_inbound(&ip_str).unwrap_or(0);
cap(&mut budget, self.ip_weekly_inbound, used);
}
if self.ip_weekly_outbound > 0 {
let used = self.db.get_ip_weekly_outbound(&ip_str).unwrap_or(0);
cap(&mut budget, self.ip_weekly_outbound, used);
}
if self.ip_monthly_inbound > 0 {
let used = self.db.get_ip_monthly_inbound(&ip_str).unwrap_or(0);
cap(&mut budget, self.ip_monthly_inbound, used);
}
if self.ip_monthly_outbound > 0 {
let used = self.db.get_ip_monthly_outbound(&ip_str).unwrap_or(0);
cap(&mut budget, self.ip_monthly_outbound, used);
}
budget
}
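The core of `remaining_budget` is the `cap` helper: each active limit shrinks the budget to its remaining headroom, a limit of 0 is skipped as unlimited, and `saturating_sub` pins an already-exceeded quota at zero. Extracted as a sketch:

```rust
// Shrink `budget` to the remaining headroom under `limit`; 0 = unlimited
// and is ignored; being over quota saturates the remainder to 0.
fn cap(budget: &mut u64, limit: u64, used: u64) {
    if limit > 0 {
        *budget = (*budget).min(limit.saturating_sub(used));
    }
}

fn main() {
    let mut budget = u64::MAX;
    cap(&mut budget, 0, 1_000_000);  // unlimited: no effect
    assert_eq!(budget, u64::MAX);
    cap(&mut budget, 10_000, 4_000); // 6_000 bytes of headroom left
    assert_eq!(budget, 6_000);
    cap(&mut budget, 5_000, 9_000);  // already over quota: saturates to 0
    assert_eq!(budget, 0);
}
```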
pub fn max_duration(&self) -> u64 {
self.max_duration
}
pub fn active_connections_count(&self, ip: &IpAddr) -> u32 {
let conns = self.active_connections.lock().unwrap();
conns.get(ip).copied().unwrap_or(0)
}
}


@@ -0,0 +1,449 @@
//! Enhanced server loop with quota enforcement.
//!
//! Wraps the standard btest server connection handler with:
//! - Pre-connection IP/user quota checks
//! - MD5 challenge-response auth against user DB
//! - TCP multi-connection session support
//! - Mid-session quota enforcement via QuotaEnforcer
//! - Post-session usage recording
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::{TcpListener, TcpStream};
use tokio::sync::Mutex;
use btest_rs::protocol::*;
use btest_rs::bandwidth::BandwidthState;
use super::enforcer::{QuotaEnforcer, StopReason};
use super::quota::{Direction, QuotaManager};
use super::user_db::UserDb;
/// Pending TCP multi-connection session.
struct TcpSession {
peer_ip: std::net::IpAddr,
username: String,
cmd: Command,
streams: Vec<TcpStream>,
expected: u8,
}
type SessionMap = Arc<Mutex<HashMap<u16, TcpSession>>>;
/// Run the pro server with quota enforcement.
pub async fn run_pro_server(
port: u16,
_ecsrp5: bool,
listen_v4: Option<String>,
listen_v6: Option<String>,
db: UserDb,
quota_mgr: QuotaManager,
quota_check_interval: u64,
) -> anyhow::Result<()> {
let v4_listener = if let Some(ref addr) = listen_v4 {
let bind_addr = format!("{}:{}", addr, port);
Some(TcpListener::bind(&bind_addr).await?)
} else {
None
};
let v6_listener = if let Some(ref addr) = listen_v6 {
let bind_addr = format!("[{}]:{}", addr, port);
Some(TcpListener::bind(&bind_addr).await?)
} else {
None
};
if v4_listener.is_none() && v6_listener.is_none() {
anyhow::bail!("No listeners bound");
}
let sessions: SessionMap = Arc::new(Mutex::new(HashMap::new()));
tracing::info!("btest-server-pro ready, accepting connections");
loop {
let (stream, peer) = match (&v4_listener, &v6_listener) {
(Some(v4), Some(v6)) => {
tokio::select! {
r = v4.accept() => r?,
r = v6.accept() => r?,
}
}
(Some(v4), None) => v4.accept().await?,
(None, Some(v6)) => v6.accept().await?,
_ => unreachable!(),
};
tracing::info!("New connection from {}", peer);
let db = db.clone();
let qm = quota_mgr.clone();
let interval = quota_check_interval;
let sess = sessions.clone();
tokio::spawn(async move {
let is_primary = match handle_pro_connection(stream, peer, db, qm.clone(), interval, sess).await {
Ok(Some((username, stop_reason, tx, rx))) => {
tracing::info!(
"Client {} (user '{}') finished: {} (tx={}, rx={})",
peer, username, stop_reason, tx, rx,
);
btest_rs::syslog_logger::test_end(
&peer.to_string(), "btest", &format!("{}", stop_reason),
tx, rx, 0, 0,
);
true
}
Ok(None) => false, // secondary conn, pending multi-conn, or rejected before connect()
Err(e) => {
tracing::error!("Client {} error: {}", peer, e);
true
}
};
// Only decrement connection count for primary connections
if is_primary {
qm.disconnect(&peer.ip());
}
});
}
}
/// Handle a single TCP connection. Returns `None` for secondary multi-conn joins,
/// pending multi-conn primaries, and connections rejected before `connect()` was called.
async fn handle_pro_connection(
mut stream: TcpStream,
peer: SocketAddr,
db: UserDb,
quota_mgr: QuotaManager,
quota_check_interval: u64,
sessions: SessionMap,
) -> anyhow::Result<Option<(String, StopReason, u64, u64)>> {
stream.set_nodelay(true)?;
// HELLO
stream.write_all(&HELLO).await?;
// Read command (or session token for secondary connections)
let mut cmd_buf = [0u8; 16];
stream.read_exact(&mut cmd_buf).await?;
// Check if this is a secondary connection joining an existing TCP session
// Secondary connections send [HI, LO, ...] matching an existing session token
{
let potential_token = u16::from_be_bytes([cmd_buf[0], cmd_buf[1]]);
let mut map = sessions.lock().await;
if let Some(session) = map.get_mut(&potential_token) {
if session.peer_ip == peer.ip()
&& session.streams.len() < session.expected as usize
{
tracing::info!(
"Secondary connection from {} joining session (token={:04x}, {}/{})",
peer, potential_token,
session.streams.len() + 1, session.expected,
);
// Auth the secondary connection with same token response
let ok = [0x01, cmd_buf[0], cmd_buf[1], 0x00];
stream.write_all(&ok).await?;
stream.flush().await?;
session.streams.push(stream);
// If all connections have joined, start the test
if session.streams.len() >= session.expected as usize {
let session = map.remove(&potential_token).unwrap();
let db2 = db.clone();
let qm2 = quota_mgr.clone();
tokio::spawn(async move {
match run_pro_multiconn_test(
session.streams, session.cmd, peer,
&session.username, db2, qm2, quota_check_interval,
).await {
Ok((stop, tx, rx)) => {
tracing::info!(
"Multi-conn {} (user '{}') finished: {} (tx={}, rx={})",
peer, session.username, stop, tx, rx,
);
}
Err(e) => {
tracing::error!("Multi-conn {} error: {}", peer, e);
}
}
});
}
return Ok(None);
}
}
}
// Primary connection — check IP quota/connection limit now
if let Err(e) = quota_mgr.check_ip(&peer.ip(), Direction::Both) {
tracing::warn!("Rejected {} — {}", peer, e);
btest_rs::syslog_logger::auth_failure(
&peer.to_string(), "-", "-", &format!("{}", e),
);
return Ok(None);
}
quota_mgr.connect(&peer.ip());
let cmd = Command::deserialize(&cmd_buf);
tracing::info!(
"Client {} command: proto={} dir={} conn_count={} tx_size={}",
peer,
if cmd.is_udp() { "UDP" } else { "TCP" },
match cmd.direction { CMD_DIR_RX => "RX", CMD_DIR_TX => "TX", _ => "BOTH" },
cmd.tcp_conn_count,
cmd.tx_size,
);
// Build auth OK response with session token for multi-connection
let is_tcp_multi = !cmd.is_udp() && cmd.tcp_conn_count > 0;
let session_token: u16 = if is_tcp_multi {
rand::random::<u16>() | 0x0101 // ensure both bytes non-zero
} else {
0
};
let ok_response: [u8; 4] = if is_tcp_multi {
[0x01, (session_token >> 8) as u8, (session_token & 0xFF) as u8, 0x00]
} else {
AUTH_OK
};
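The token scheme above relies on `| 0x0101` setting the low bit of each byte, so neither byte of the session token can be zero and the two token bytes in the OK response round-trip losslessly. A sketch of that layout (using a fixed sample instead of `rand`):

```rust
// Force both bytes of a session token non-zero, as the server does.
fn make_token(raw: u16) -> u16 { raw | 0x0101 }

// The two token bytes carried in the 4-byte OK response.
fn token_bytes(token: u16) -> [u8; 2] {
    [(token >> 8) as u8, (token & 0xFF) as u8]
}

fn main() {
    // worst case: a raw value of 0 still yields two non-zero bytes
    let t = make_token(0x0000);
    let [hi, lo] = token_bytes(t);
    assert!(hi != 0 && lo != 0);
    // secondary connections reconstruct the token from those bytes
    assert_eq!(u16::from_be_bytes([hi, lo]), t);
}
```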
// Authenticate — MD5 challenge-response against DB
stream.write_all(&AUTH_REQUIRED).await?;
let challenge = btest_rs::auth::generate_challenge();
stream.write_all(&challenge).await?;
stream.flush().await?;
let mut response = [0u8; 48];
stream.read_exact(&mut response).await?;
let received_hash = &response[0..16];
let received_user = &response[16..48];
let user_end = received_user.iter().position(|&b| b == 0).unwrap_or(32);
let username = std::str::from_utf8(&received_user[..user_end])
.unwrap_or("")
.to_string();
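The 48-byte auth response is a 16-byte hash followed by a 32-byte NUL-padded username field; the username parsing above can be sketched in isolation (with lossy fallbacks matching the handler: no terminator means the whole field is the name, invalid UTF-8 degrades to empty):

```rust
// Extract a NUL-terminated username from a fixed-size field.
fn parse_username(field: &[u8]) -> String {
    let end = field.iter().position(|&b| b == 0).unwrap_or(field.len());
    std::str::from_utf8(&field[..end]).unwrap_or("").to_string()
}

fn main() {
    let mut field = [0u8; 32];
    field[..5].copy_from_slice(b"admin");
    assert_eq!(parse_username(&field), "admin");
    // no NUL terminator: the whole field is the name
    assert_eq!(parse_username(b"abc"), "abc");
    // invalid UTF-8 degrades to an empty username
    assert_eq!(parse_username(&[0xFF, 0xFE, 0x00]), "");
}
```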
// Verify against DB
let user = db.get_user(&username)?;
match user {
None => {
tracing::warn!("Auth failed: user '{}' not found", username);
stream.write_all(&AUTH_FAILED).await?;
btest_rs::syslog_logger::auth_failure(
&peer.to_string(), &username, "md5", "user not found",
);
anyhow::bail!("User not found");
}
Some(u) => {
if !u.enabled {
tracing::warn!("Auth failed: user '{}' is disabled", username);
stream.write_all(&AUTH_FAILED).await?;
btest_rs::syslog_logger::auth_failure(
&peer.to_string(), &username, "md5", "user disabled",
);
anyhow::bail!("User disabled");
}
// Verify MD5 hash against stored raw password
if let Ok(Some(raw_pass)) = db.get_password(&username) {
let expected_hash = btest_rs::auth::compute_auth_hash(&raw_pass, &challenge);
if received_hash != expected_hash {
tracing::warn!("Auth failed: password mismatch for user '{}'", username);
stream.write_all(&AUTH_FAILED).await?;
btest_rs::syslog_logger::auth_failure(
&peer.to_string(), &username, "md5", "password mismatch",
);
anyhow::bail!("Auth failed");
}
}
// If no raw password stored, accept (backwards compat with old DB entries)
stream.write_all(&ok_response).await?;
stream.flush().await?;
tracing::info!("Auth successful for user '{}'", username);
btest_rs::syslog_logger::auth_success(
&peer.to_string(), &username, "md5",
);
}
}
// Check user quota before starting test
if let Err(e) = quota_mgr.check_user(&username) {
tracing::warn!("Quota check failed for '{}': {}", username, e);
btest_rs::syslog_logger::auth_failure(
&peer.to_string(), &username, "quota", &format!("{}", e),
);
return Ok(Some((username, StopReason::UserDailyQuota, 0, 0)));
}
// TCP multi-connection: register session and wait for secondary connections
if is_tcp_multi {
tracing::info!(
"TCP multi-connection: waiting for {} connections (token={:04x})",
cmd.tcp_conn_count, session_token,
);
let mut map = sessions.lock().await;
map.insert(session_token, TcpSession {
peer_ip: peer.ip(),
username: username.clone(),
cmd: cmd.clone(),
streams: vec![stream],
expected: cmd.tcp_conn_count, // tcp_conn_count includes the primary
});
// The test will be started when all connections join (in the secondary handler above)
return Ok(None);
}
// Single-connection test
run_pro_single_test(stream, cmd, peer, &username, db, quota_mgr, quota_check_interval).await
.map(|(stop, tx, rx)| Some((username, stop, tx, rx)))
}
/// Run a single-connection bandwidth test with quota enforcement.
async fn run_pro_single_test(
stream: TcpStream,
cmd: Command,
peer: SocketAddr,
username: &str,
db: UserDb,
quota_mgr: QuotaManager,
quota_check_interval: u64,
) -> anyhow::Result<(StopReason, u64, u64)> {
let proto_str = if cmd.is_udp() { "UDP" } else { "TCP" };
let dir_str = match cmd.direction {
CMD_DIR_RX => "RX", CMD_DIR_TX => "TX", _ => "BOTH"
};
let session_id = db.start_session(
username, &peer.ip().to_string(), proto_str, dir_str,
)?;
btest_rs::syslog_logger::test_start(
&peer.to_string(), proto_str, dir_str, cmd.tcp_conn_count,
);
let state = BandwidthState::new();
// Set byte budget
let budget = quota_mgr.remaining_budget(username, &peer.ip());
if budget < u64::MAX {
state.set_budget(budget);
tracing::info!("Byte budget for '{}' from {}: {} bytes", username, peer.ip(), budget);
}
let enforcer = QuotaEnforcer::new(
quota_mgr.clone(),
username.to_string(),
peer.ip(),
state.clone(),
quota_check_interval,
quota_mgr.max_duration(),
);
let enforcer_state = state.clone();
let enforcer_handle = tokio::spawn(async move {
enforcer.run().await
});
static UDP_PORT_OFFSET: std::sync::atomic::AtomicU16 = std::sync::atomic::AtomicU16::new(0);
let mut stream_mut = stream;
let test_result = if cmd.is_udp() {
let offset = UDP_PORT_OFFSET.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
let udp_port = btest_rs::protocol::BTEST_UDP_PORT_START + offset;
btest_rs::server::run_udp_test(
&mut stream_mut, peer, &cmd, state.clone(), udp_port,
).await
} else {
btest_rs::server::run_tcp_test(stream_mut, cmd.clone(), state.clone()).await
};
enforcer_state.running.store(false, std::sync::atomic::Ordering::SeqCst);
let stop_reason = enforcer_handle.await.unwrap_or(StopReason::ClientDisconnected);
let final_reason = match &test_result {
// The enforcer's verdict stands when the test ended cleanly;
// an I/O error means the client disconnected mid-test.
Ok(_) => stop_reason,
Err(_) => StopReason::ClientDisconnected,
};
let (total_tx, total_rx, _, _) = state.summary();
quota_mgr.record_usage(username, &peer.ip().to_string(), total_tx, total_rx);
db.end_session(session_id, total_tx, total_rx)?;
Ok((final_reason, total_tx, total_rx))
}
/// Run a TCP multi-connection test with all streams collected.
/// Delegates to the standard multi-conn handler which correctly manages
/// TX+status injection for bidirectional mode.
async fn run_pro_multiconn_test(
streams: Vec<TcpStream>,
cmd: Command,
peer: SocketAddr,
username: &str,
db: UserDb,
quota_mgr: QuotaManager,
quota_check_interval: u64,
) -> anyhow::Result<(StopReason, u64, u64)> {
let dir_str = match cmd.direction {
CMD_DIR_RX => "RX", CMD_DIR_TX => "TX", _ => "BOTH"
};
let session_id = db.start_session(
username, &peer.ip().to_string(), "TCP", dir_str,
)?;
tracing::info!(
"Starting TCP multi-conn test: {} streams, dir={}",
streams.len(), dir_str,
);
let state = BandwidthState::new();
let budget = quota_mgr.remaining_budget(username, &peer.ip());
if budget < u64::MAX {
state.set_budget(budget);
}
let enforcer = QuotaEnforcer::new(
quota_mgr.clone(),
username.to_string(),
peer.ip(),
state.clone(),
quota_check_interval,
quota_mgr.max_duration(),
);
let enforcer_state = state.clone();
let enforcer_handle = tokio::spawn(async move {
enforcer.run().await
});
// Use the standard multi-connection handler which correctly handles
// all direction modes (TX, RX, BOTH with status injection)
let _test_result = btest_rs::server::run_tcp_multiconn_test(
streams, cmd, state.clone(),
).await;
enforcer_state.running.store(false, std::sync::atomic::Ordering::SeqCst);
let stop_reason = enforcer_handle.await.unwrap_or(StopReason::ClientDisconnected);
let (total_tx, total_rx, _, _) = state.summary();
quota_mgr.record_usage(username, &peer.ip().to_string(), total_tx, total_rx);
db.end_session(session_id, total_tx, total_rx)?;
Ok((stop_reason, total_tx, total_rx))
}

src/server_pro/user_db.rs Normal file

@@ -0,0 +1,641 @@
//! SQLite-based user database for btest-server-pro.
//!
//! Stores users with credentials, quotas, and usage tracking.
use rusqlite::{Connection, params};
use std::sync::{Arc, Mutex};
#[derive(Clone)]
pub struct UserDb {
conn: Arc<Mutex<Connection>>,
path: Arc<String>,
}
#[derive(Debug, Clone)]
pub struct User {
pub id: i64,
pub username: String,
pub password_hash: String, // stored as hex of SHA256(username:password)
pub daily_quota: i64, // 0 = use default
pub weekly_quota: i64, // 0 = use default
pub enabled: bool,
}
#[derive(Debug)]
pub struct UsageRecord {
pub username: String,
pub date: String, // YYYY-MM-DD
pub tx_bytes: u64,
pub rx_bytes: u64,
pub test_count: u32,
}
/// Per-second bandwidth interval data for graphing.
#[derive(Debug, Clone)]
pub struct IntervalData {
pub interval_num: i32,
pub tx_mbps: f64,
pub rx_mbps: f64,
pub local_cpu: i32,
pub remote_cpu: i32,
pub lost: i64,
}
/// Summary of a single test session.
#[derive(Debug, Clone)]
pub struct SessionSummary {
pub id: i64,
pub started_at: String,
pub ended_at: Option<String>,
pub protocol: String,
pub direction: String,
pub tx_bytes: u64,
pub rx_bytes: u64,
}
/// Aggregate statistics for an IP address.
#[derive(Debug, Clone)]
pub struct IpStats {
pub total_tests: u64,
pub total_inbound: u64,
pub total_outbound: u64,
pub avg_tx_mbps: f64,
pub avg_rx_mbps: f64,
}
impl UserDb {
pub fn open(path: &str) -> anyhow::Result<Self> {
let conn = Connection::open(path)?;
conn.execute_batch("PRAGMA journal_mode=WAL; PRAGMA busy_timeout=5000;")?;
Ok(Self {
conn: Arc::new(Mutex::new(conn)),
path: Arc::new(path.to_string()),
})
}
/// Return the database file path.
pub fn path(&self) -> &str {
&self.path
}
pub fn ensure_tables(&self) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
conn.execute_batch("
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
daily_quota INTEGER DEFAULT 0,
weekly_quota INTEGER DEFAULT 0,
enabled INTEGER DEFAULT 1,
created_at TEXT DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS usage (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT NOT NULL,
date TEXT NOT NULL,
tx_bytes INTEGER DEFAULT 0,
rx_bytes INTEGER DEFAULT 0,
test_count INTEGER DEFAULT 0,
UNIQUE(username, date)
);
CREATE TABLE IF NOT EXISTS ip_usage (
id INTEGER PRIMARY KEY AUTOINCREMENT,
ip TEXT NOT NULL,
date TEXT NOT NULL,
inbound_bytes INTEGER DEFAULT 0,
outbound_bytes INTEGER DEFAULT 0,
test_count INTEGER DEFAULT 0,
UNIQUE(ip, date)
);
CREATE TABLE IF NOT EXISTS sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT NOT NULL,
peer_ip TEXT NOT NULL,
started_at TEXT DEFAULT (datetime('now')),
ended_at TEXT,
tx_bytes INTEGER DEFAULT 0,
rx_bytes INTEGER DEFAULT 0,
protocol TEXT,
direction TEXT
);
CREATE TABLE IF NOT EXISTS test_intervals (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id INTEGER NOT NULL,
interval_num INTEGER NOT NULL,
tx_bytes INTEGER DEFAULT 0,
rx_bytes INTEGER DEFAULT 0,
tx_mbps REAL DEFAULT 0,
rx_mbps REAL DEFAULT 0,
local_cpu INTEGER DEFAULT 0,
remote_cpu INTEGER DEFAULT 0,
lost_packets INTEGER DEFAULT 0,
FOREIGN KEY(session_id) REFERENCES sessions(id)
);
CREATE INDEX IF NOT EXISTS idx_usage_user_date ON usage(username, date);
CREATE INDEX IF NOT EXISTS idx_ip_usage_date ON ip_usage(ip, date);
CREATE INDEX IF NOT EXISTS idx_sessions_peer ON sessions(peer_ip, started_at);
CREATE INDEX IF NOT EXISTS idx_intervals_session ON test_intervals(session_id);
")?;
Ok(())
}
pub fn user_count(&self) -> anyhow::Result<u64> {
let conn = self.conn.lock().unwrap();
let count: i64 = conn.query_row("SELECT COUNT(*) FROM users", [], |r| r.get(0))?;
Ok(count as u64)
}
pub fn add_user(&self, username: &str, password: &str) -> anyhow::Result<()> {
let hash = hash_password(username, password);
let conn = self.conn.lock().unwrap();
// Ensure password_raw column exists (migration for older databases)
let _ = conn.execute("ALTER TABLE users ADD COLUMN password_raw TEXT DEFAULT ''", []);
// Upsert rather than INSERT OR REPLACE: REPLACE deletes and recreates the
// row, which would reset daily_quota/weekly_quota/enabled and change the id.
conn.execute(
"INSERT INTO users (username, password_hash, password_raw) VALUES (?1, ?2, ?3)
ON CONFLICT(username) DO UPDATE SET password_hash = ?2, password_raw = ?3",
params![username, hash, password],
)?;
Ok(())
}
/// Get the raw password for MD5 challenge-response auth.
pub fn get_password(&self, username: &str) -> anyhow::Result<Option<String>> {
let conn = self.conn.lock().unwrap();
let result = conn.query_row(
"SELECT password_raw FROM users WHERE username = ?1 AND enabled = 1",
params![username],
|row| row.get::<_, String>(0),
).optional()?;
Ok(result)
}
pub fn get_user(&self, username: &str) -> anyhow::Result<Option<User>> {
let conn = self.conn.lock().unwrap();
let mut stmt = conn.prepare(
"SELECT id, username, password_hash, daily_quota, weekly_quota, enabled FROM users WHERE username = ?1"
)?;
let user = stmt.query_row(params![username], |row| {
Ok(User {
id: row.get(0)?,
username: row.get(1)?,
password_hash: row.get(2)?,
daily_quota: row.get(3)?,
weekly_quota: row.get(4)?,
enabled: row.get::<_, i32>(5)? != 0,
})
}).optional()?;
Ok(user)
}
pub fn verify_password(&self, username: &str, password: &str) -> anyhow::Result<bool> {
let expected = hash_password(username, password);
match self.get_user(username)? {
Some(user) => Ok(user.enabled && user.password_hash == expected),
None => Ok(false),
}
}
pub fn record_usage(&self, username: &str, tx_bytes: u64, rx_bytes: u64) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
conn.execute(
"INSERT INTO usage (username, date, tx_bytes, rx_bytes, test_count)
VALUES (?1, ?2, ?3, ?4, 1)
ON CONFLICT(username, date) DO UPDATE SET
tx_bytes = tx_bytes + ?3,
rx_bytes = rx_bytes + ?4,
test_count = test_count + 1",
params![username, today, tx_bytes as i64, rx_bytes as i64],
)?;
Ok(())
}
pub fn get_daily_usage(&self, username: &str) -> anyhow::Result<(u64, u64)> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
let result = conn.query_row(
"SELECT COALESCE(SUM(tx_bytes),0), COALESCE(SUM(rx_bytes),0) FROM usage WHERE username = ?1 AND date = ?2",
params![username, today],
|row| {
let a: i64 = row.get(0)?;
let b: i64 = row.get(1)?;
Ok((a as u64, b as u64))
},
)?;
Ok(result)
}
pub fn get_weekly_usage(&self, username: &str) -> anyhow::Result<(u64, u64)> {
let conn = self.conn.lock().unwrap();
let result = conn.query_row(
"SELECT COALESCE(SUM(tx_bytes),0), COALESCE(SUM(rx_bytes),0) FROM usage
WHERE username = ?1 AND date >= date('now', '-7 days')",
params![username],
|row| {
let a: i64 = row.get(0)?;
let b: i64 = row.get(1)?;
Ok((a as u64, b as u64))
},
)?;
Ok(result)
}
pub fn get_monthly_usage(&self, username: &str) -> anyhow::Result<(u64, u64)> {
let conn = self.conn.lock().unwrap();
let result = conn.query_row(
"SELECT COALESCE(SUM(tx_bytes),0), COALESCE(SUM(rx_bytes),0) FROM usage
WHERE username = ?1 AND date >= date('now', '-30 days')",
params![username],
|row| {
let a: i64 = row.get(0)?;
let b: i64 = row.get(1)?;
Ok((a as u64, b as u64))
},
)?;
Ok(result)
}
// --- Per-IP usage tracking ---
pub fn record_ip_usage(&self, ip: &str, tx_bytes: u64, rx_bytes: u64) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
// From the server's perspective: inbound = data coming FROM the client (rx),
// outbound = data going TO the client (tx).
let inbound = rx_bytes;
let outbound = tx_bytes;
conn.execute(
"INSERT INTO ip_usage (ip, date, inbound_bytes, outbound_bytes, test_count)
VALUES (?1, ?2, ?3, ?4, 1)
ON CONFLICT(ip, date) DO UPDATE SET
inbound_bytes = inbound_bytes + ?3,
outbound_bytes = outbound_bytes + ?4,
test_count = test_count + 1",
params![ip, today, inbound as i64, outbound as i64],
)?;
Ok(())
}
pub fn get_ip_daily_usage(&self, ip: &str) -> anyhow::Result<(u64, u64)> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
let result = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes),0), COALESCE(SUM(outbound_bytes),0) FROM ip_usage WHERE ip = ?1 AND date = ?2",
params![ip, today],
|row| {
let inbound: i64 = row.get(0)?;
let outbound: i64 = row.get(1)?;
Ok((inbound as u64, outbound as u64))
},
)?;
Ok(result)
}
pub fn get_ip_weekly_usage(&self, ip: &str) -> anyhow::Result<(u64, u64)> {
let conn = self.conn.lock().unwrap();
let result = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes),0), COALESCE(SUM(outbound_bytes),0) FROM ip_usage
WHERE ip = ?1 AND date >= date('now', '-7 days')",
params![ip],
|row| {
let inbound: i64 = row.get(0)?;
let outbound: i64 = row.get(1)?;
Ok((inbound as u64, outbound as u64))
},
)?;
Ok(result)
}
pub fn get_ip_monthly_usage(&self, ip: &str) -> anyhow::Result<(u64, u64)> {
let conn = self.conn.lock().unwrap();
let result = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes),0), COALESCE(SUM(outbound_bytes),0) FROM ip_usage
WHERE ip = ?1 AND date >= date('now', '-30 days')",
params![ip],
|row| {
let inbound: i64 = row.get(0)?;
let outbound: i64 = row.get(1)?;
Ok((inbound as u64, outbound as u64))
},
)?;
Ok(result)
}
// --- Per-IP directional usage (single-column queries) ---
/// Record inbound-only IP usage (data coming FROM the client).
pub fn record_ip_inbound_usage(&self, ip: &str, bytes: u64) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
conn.execute(
"INSERT INTO ip_usage (ip, date, inbound_bytes, test_count)
VALUES (?1, ?2, ?3, 0)
ON CONFLICT(ip, date) DO UPDATE SET
inbound_bytes = inbound_bytes + ?3",
params![ip, today, bytes as i64],
)?;
Ok(())
}
/// Record outbound-only IP usage (data going TO the client).
pub fn record_ip_outbound_usage(&self, ip: &str, bytes: u64) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
conn.execute(
"INSERT INTO ip_usage (ip, date, outbound_bytes, test_count)
VALUES (?1, ?2, ?3, 0)
ON CONFLICT(ip, date) DO UPDATE SET
outbound_bytes = outbound_bytes + ?3",
params![ip, today, bytes as i64],
)?;
Ok(())
}
/// Get daily inbound bytes for an IP.
pub fn get_ip_daily_inbound(&self, ip: &str) -> anyhow::Result<u64> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
let result: i64 = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes),0) FROM ip_usage WHERE ip = ?1 AND date = ?2",
params![ip, today],
|row| row.get(0),
)?;
Ok(result as u64)
}
/// Get weekly inbound bytes for an IP.
pub fn get_ip_weekly_inbound(&self, ip: &str) -> anyhow::Result<u64> {
let conn = self.conn.lock().unwrap();
let result: i64 = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes),0) FROM ip_usage WHERE ip = ?1 AND date >= date('now', '-7 days')",
params![ip],
|row| row.get(0),
)?;
Ok(result as u64)
}
/// Get monthly inbound bytes for an IP.
pub fn get_ip_monthly_inbound(&self, ip: &str) -> anyhow::Result<u64> {
let conn = self.conn.lock().unwrap();
let result: i64 = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes),0) FROM ip_usage WHERE ip = ?1 AND date >= date('now', '-30 days')",
params![ip],
|row| row.get(0),
)?;
Ok(result as u64)
}
/// Get daily outbound bytes for an IP.
pub fn get_ip_daily_outbound(&self, ip: &str) -> anyhow::Result<u64> {
let conn = self.conn.lock().unwrap();
let today = chrono_date_today();
let result: i64 = conn.query_row(
"SELECT COALESCE(SUM(outbound_bytes),0) FROM ip_usage WHERE ip = ?1 AND date = ?2",
params![ip, today],
|row| row.get(0),
)?;
Ok(result as u64)
}
/// Get weekly outbound bytes for an IP.
pub fn get_ip_weekly_outbound(&self, ip: &str) -> anyhow::Result<u64> {
let conn = self.conn.lock().unwrap();
let result: i64 = conn.query_row(
"SELECT COALESCE(SUM(outbound_bytes),0) FROM ip_usage WHERE ip = ?1 AND date >= date('now', '-7 days')",
params![ip],
|row| row.get(0),
)?;
Ok(result as u64)
}
/// Get monthly outbound bytes for an IP.
pub fn get_ip_monthly_outbound(&self, ip: &str) -> anyhow::Result<u64> {
let conn = self.conn.lock().unwrap();
let result: i64 = conn.query_row(
"SELECT COALESCE(SUM(outbound_bytes),0) FROM ip_usage WHERE ip = ?1 AND date >= date('now', '-30 days')",
params![ip],
|row| row.get(0),
)?;
Ok(result as u64)
}
// --- Session tracking ---
pub fn start_session(&self, username: &str, peer_ip: &str, protocol: &str, direction: &str) -> anyhow::Result<i64> {
let conn = self.conn.lock().unwrap();
conn.execute(
"INSERT INTO sessions (username, peer_ip, protocol, direction) VALUES (?1, ?2, ?3, ?4)",
params![username, peer_ip, protocol, direction],
)?;
Ok(conn.last_insert_rowid())
}
pub fn end_session(&self, session_id: i64, tx_bytes: u64, rx_bytes: u64) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
conn.execute(
"UPDATE sessions SET ended_at = datetime('now'), tx_bytes = ?1, rx_bytes = ?2 WHERE id = ?3",
params![tx_bytes as i64, rx_bytes as i64, session_id],
)?;
Ok(())
}
// --- Per-second interval tracking ---
/// Record a single per-second interval data point for a session.
#[allow(clippy::too_many_arguments)]
pub fn record_test_interval(
&self,
session_id: i64,
interval_num: i32,
tx_bytes: u64,
rx_bytes: u64,
tx_mbps: f64,
rx_mbps: f64,
local_cpu: i32,
remote_cpu: i32,
lost: i64,
) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
conn.execute(
"INSERT INTO test_intervals (session_id, interval_num, tx_bytes, rx_bytes, tx_mbps, rx_mbps, local_cpu, remote_cpu, lost_packets)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9)",
params![
session_id,
interval_num,
tx_bytes as i64,
rx_bytes as i64,
tx_mbps,
rx_mbps,
local_cpu,
remote_cpu,
lost,
],
)?;
Ok(())
}
/// Retrieve all interval data points for a given session, ordered by interval number.
pub fn get_session_intervals(&self, session_id: i64) -> anyhow::Result<Vec<IntervalData>> {
let conn = self.conn.lock().unwrap();
let mut stmt = conn.prepare(
"SELECT interval_num, tx_mbps, rx_mbps, local_cpu, remote_cpu, lost_packets
FROM test_intervals WHERE session_id = ?1 ORDER BY interval_num"
)?;
let rows = stmt.query_map(params![session_id], |row| {
Ok(IntervalData {
interval_num: row.get(0)?,
tx_mbps: row.get(1)?,
rx_mbps: row.get(2)?,
local_cpu: row.get(3)?,
remote_cpu: row.get(4)?,
lost: row.get(5)?,
})
})?.filter_map(|r| r.ok()).collect();
Ok(rows)
}
/// Return the last N sessions for a given IP address, most recent first.
pub fn get_ip_sessions(&self, ip: &str, limit: u32) -> anyhow::Result<Vec<SessionSummary>> {
let conn = self.conn.lock().unwrap();
let mut stmt = conn.prepare(
"SELECT id, started_at, ended_at, protocol, direction, tx_bytes, rx_bytes
FROM sessions WHERE peer_ip = ?1 ORDER BY started_at DESC LIMIT ?2"
)?;
let rows = stmt.query_map(params![ip, limit], |row| {
Ok(SessionSummary {
id: row.get(0)?,
started_at: row.get(1)?,
ended_at: row.get(2)?,
protocol: row.get::<_, Option<String>>(3)?.unwrap_or_default(),
direction: row.get::<_, Option<String>>(4)?.unwrap_or_default(),
tx_bytes: row.get::<_, i64>(5).map(|v| v as u64)?,
rx_bytes: row.get::<_, i64>(6).map(|v| v as u64)?,
})
})?.filter_map(|r| r.ok()).collect();
Ok(rows)
}
/// Return aggregate statistics for an IP address across all sessions.
pub fn get_ip_stats(&self, ip: &str) -> anyhow::Result<IpStats> {
let conn = self.conn.lock().unwrap();
// SUM(test_count) rather than COUNT(*): ip_usage holds one row per
// (ip, date) day, so COUNT(*) would report active days, not tests.
let result = conn.query_row(
"SELECT
COALESCE(SUM(test_count), 0) as total_tests,
COALESCE(SUM(inbound_bytes), 0) as total_inbound,
COALESCE(SUM(outbound_bytes), 0) as total_outbound
FROM ip_usage WHERE ip = ?1",
params![ip],
|row| {
let total_tests: i64 = row.get(0)?;
let total_inbound: i64 = row.get(1)?;
let total_outbound: i64 = row.get(2)?;
Ok((total_tests as u64, total_inbound as u64, total_outbound as u64))
},
)?;
// Compute average Mbps from test_intervals joined through sessions
let (avg_tx, avg_rx) = conn.query_row(
"SELECT
COALESCE(AVG(ti.tx_mbps), 0.0),
COALESCE(AVG(ti.rx_mbps), 0.0)
FROM test_intervals ti
INNER JOIN sessions s ON ti.session_id = s.id
WHERE s.peer_ip = ?1",
params![ip],
|row| {
let avg_tx: f64 = row.get(0)?;
let avg_rx: f64 = row.get(1)?;
Ok((avg_tx, avg_rx))
},
)?;
Ok(IpStats {
total_tests: result.0,
total_inbound: result.1,
total_outbound: result.2,
avg_tx_mbps: avg_tx,
avg_rx_mbps: avg_rx,
})
}
pub fn delete_user(&self, username: &str) -> anyhow::Result<bool> {
let conn = self.conn.lock().unwrap();
let rows = conn.execute("DELETE FROM users WHERE username = ?1", params![username])?;
Ok(rows > 0)
}
pub fn set_user_enabled(&self, username: &str, enabled: bool) -> anyhow::Result<()> {
let conn = self.conn.lock().unwrap();
conn.execute(
"UPDATE users SET enabled = ?1 WHERE username = ?2",
params![enabled as i32, username],
)?;
Ok(())
}
pub fn set_user_quota(&self, username: &str, daily: i64, weekly: i64, _monthly: i64) -> anyhow::Result<()> {
// Monthly usage is computed over a rolling 30-day window of the usage
// table; there is no monthly_quota column, so `_monthly` is accepted for
// API symmetry but not stored.
let conn = self.conn.lock().unwrap();
conn.execute(
"UPDATE users SET daily_quota = ?1, weekly_quota = ?2 WHERE username = ?3",
params![daily, weekly, username],
)?;
Ok(())
}
pub fn list_users(&self) -> anyhow::Result<Vec<User>> {
let conn = self.conn.lock().unwrap();
let mut stmt = conn.prepare(
"SELECT id, username, password_hash, daily_quota, weekly_quota, enabled FROM users ORDER BY username"
)?;
let users = stmt.query_map([], |row| {
Ok(User {
id: row.get(0)?,
username: row.get(1)?,
password_hash: row.get(2)?,
daily_quota: row.get(3)?,
weekly_quota: row.get(4)?,
enabled: row.get::<_, i32>(5)? != 0,
})
})?.filter_map(|r| r.ok()).collect();
Ok(users)
}
}
/// Hex-encoded SHA-256 of `username:password`; matches the format stored in
/// `User::password_hash`.
fn hash_password(username: &str, password: &str) -> String {
use sha2::{Sha256, Digest};
let mut hasher = Sha256::new();
hasher.update(format!("{}:{}", username, password).as_bytes());
let result = hasher.finalize();
result.iter().map(|b| format!("{:02x}", b)).collect()
}
fn chrono_date_today() -> String {
// Simple date without chrono crate
use std::time::{SystemTime, UNIX_EPOCH};
let secs = SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or_default().as_secs();
let days = secs / 86400;
let mut y = 1970u64;
let mut remaining = days;
loop {
let year_len = if y % 4 == 0 && (y % 100 != 0 || y % 400 == 0) { 366 } else { 365 };
if remaining < year_len { break; }
remaining -= year_len;
y += 1;
}
let leap = y % 4 == 0 && (y % 100 != 0 || y % 400 == 0);
let days_in_months = [31u64, if leap { 29 } else { 28 }, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
let mut m = 0usize;
for i in 0..12 {
if remaining < days_in_months[i] { m = i; break; }
remaining -= days_in_months[i];
}
format!("{:04}-{:02}-{:02}", y, m + 1, remaining + 1)
}
// Trait import: provides `.optional()` on rusqlite query results.
use rusqlite::OptionalExtension;
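The hand-rolled epoch-to-date conversion in `chrono_date_today` can be sanity-checked in isolation. Below is a minimal stdlib-only sketch that parameterizes the same algorithm over the timestamp (the name `date_from_unix_secs` is introduced here for illustration, not part of the module) and asserts a few known calendar dates, including a leap day:

```rust
// Sketch: the same days -> YYYY-MM-DD algorithm as chrono_date_today, but
// taking epoch seconds as a parameter so it can be checked against known dates.
fn date_from_unix_secs(secs: u64) -> String {
    let days = secs / 86400;
    let mut y = 1970u64;
    let mut remaining = days;
    loop {
        // Length of year `y`, honoring the Gregorian century rules.
        let year_len = if y % 4 == 0 && (y % 100 != 0 || y % 400 == 0) { 366 } else { 365 };
        if remaining < year_len { break; }
        remaining -= year_len;
        y += 1;
    }
    let leap = y % 4 == 0 && (y % 100 != 0 || y % 400 == 0);
    let days_in_months = [31u64, if leap { 29 } else { 28 }, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
    let mut m = 0usize;
    for i in 0..12 {
        if remaining < days_in_months[i] { m = i; break; }
        remaining -= days_in_months[i];
    }
    format!("{:04}-{:02}-{:02}", y, m + 1, remaining + 1)
}

fn main() {
    assert_eq!(date_from_unix_secs(0), "1970-01-01");               // epoch start
    assert_eq!(date_from_unix_secs(19_782 * 86_400), "2024-02-29"); // leap day
    assert_eq!(date_from_unix_secs(19_783 * 86_400), "2024-03-01"); // day after
    println!("ok");
}
```

Like the original, this is UTC-only: "today" rolls over at midnight UTC, which is acceptable for quota bucketing as long as every write and read uses the same function.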

src/server_pro/web/mod.rs

@@ -0,0 +1,811 @@
//! Web dashboard module for btest-server-pro.
//!
//! Provides an axum-based HTTP dashboard with:
//! - Landing page with IP lookup
//! - Per-IP session history and statistics
//! - Chart.js throughput graphs
//!
//! # Feature gate
//!
//! This entire module is compiled only when the `pro` feature is active
//! (it lives inside the `btest-server-pro` binary crate which already
//! requires `--features pro`).
//!
//! # Template files
//!
//! The HTML source lives in `src/server_pro/web/templates/` as standalone
//! `.html` files for easy editing. The Rust code embeds them via the askama
//! `source` attribute so no `askama.toml` configuration is needed. If you
//! prefer external template files, create `askama.toml` at the crate root:
//!
//! ```toml
//! [[dirs]]
//! path = "src/server_pro/web/templates"
//! ```
//!
//! Then change `source = "..."` to `path = "index.html"` (etc.) in the
//! template structs below.
use std::sync::Arc;
use askama::Template;
use axum::extract::{Path, State};
use axum::http::StatusCode;
use axum::response::{Html, IntoResponse, Response};
use axum::routing::get;
use axum::Router;
use rusqlite::{params, Connection};
use serde::Serialize;
use super::user_db::UserDb;
// ---------------------------------------------------------------------------
// Shared state
// ---------------------------------------------------------------------------
/// Shared application state passed to all handlers via axum's `State`.
pub struct WebState {
/// Reference to the main user/session database.
pub db: UserDb,
/// Separate read-only connection for dashboard queries that are not
/// exposed by [`UserDb`] (e.g. listing sessions, aggregate stats).
/// Wrapped in a [`std::sync::Mutex`] because [`rusqlite::Connection`]
/// is not `Send + Sync` on its own.
pub query_conn: std::sync::Mutex<Connection>,
}
// ---------------------------------------------------------------------------
// Router constructor
// ---------------------------------------------------------------------------
/// Default database filename used when `BTEST_DB_PATH` is not set.
const DEFAULT_DB_PATH: &str = "btest-users.db";
/// Build the axum [`Router`] for the web dashboard.
///
/// The read-only query connection is opened against the same database file
/// as the supplied [`UserDb`], obtained via [`UserDb::path`].
///
/// # Panics
///
/// Panics if the read-only database connection or the DDL for the
/// `session_intervals` table cannot be established. This is intentional:
/// the web module is optional and failure during startup should surface
/// loudly rather than silently serving broken pages.
pub fn create_router(db: UserDb) -> Router {
let db_path = db.path().to_string();
let query_conn = Connection::open_with_flags(
&db_path,
rusqlite::OpenFlags::SQLITE_OPEN_READ_ONLY
| rusqlite::OpenFlags::SQLITE_OPEN_NO_MUTEX,
)
.expect("web: failed to open read-only database connection");
query_conn
.execute_batch("PRAGMA busy_timeout=5000;")
.expect("web: failed to set PRAGMA on query connection");
// Ensure the `session_intervals` table exists. The server loop must
// INSERT rows for the chart to have data; the table is created here so
// the schema is ready.
ensure_web_tables(&db_path).expect("web: failed to create session_intervals table");
let state = Arc::new(WebState {
db,
query_conn: std::sync::Mutex::new(query_conn),
});
// axum 0.8 uses `{param}` syntax for path parameters.
Router::new()
.route("/", get(index_page))
.route("/dashboard/{ip}", get(dashboard_page))
.route("/api/ip/{ip}/sessions", get(api_sessions))
.route("/api/ip/{ip}/stats", get(api_stats))
.route("/api/ip/{ip}/export", get(api_export))
.route("/api/ip/{ip}/quota", get(api_quota))
.route("/api/session/{id}/intervals", get(api_intervals))
.with_state(state)
}
/// Create additional tables the web dashboard depends on.
///
/// Opens a short-lived writable connection solely for DDL so it does not
/// interfere with the main [`UserDb`] connection.
fn ensure_web_tables(db_path: &str) -> anyhow::Result<()> {
let conn = Connection::open(db_path)?;
conn.execute_batch("PRAGMA busy_timeout=5000;")?;
// The UNIQUE(session_id, second) constraint already creates the lookup
// index, so no explicit CREATE INDEX is needed. (SQLite index names are
// database-global: an index named idx_intervals_session would collide with
// the test_intervals index of the same name created by UserDb::ensure_tables
// and be silently skipped by IF NOT EXISTS.)
conn.execute_batch(
"CREATE TABLE IF NOT EXISTS session_intervals (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id INTEGER NOT NULL,
second INTEGER NOT NULL,
tx_bytes INTEGER NOT NULL DEFAULT 0,
rx_bytes INTEGER NOT NULL DEFAULT 0,
UNIQUE(session_id, second)
);",
)?;
Ok(())
}
// ---------------------------------------------------------------------------
// Askama templates (embedded via `source`)
// ---------------------------------------------------------------------------
/// Landing / index page template.
#[derive(Template)]
#[template(
source = r##"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>btest-rs — Free Public Bandwidth Test Server</title>
<style>
*{margin:0;padding:0;box-sizing:border-box}
body{font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;background:#0f1117;color:#e1e4e8;min-height:100vh;display:flex;flex-direction:column;align-items:center;padding:2rem 1rem}
.container{max-width:720px;width:100%;padding:1rem 0}
h1{font-size:2.2rem;margin-bottom:.25rem;color:#58a6ff;text-align:center}
.subtitle{color:#8b949e;margin-bottom:2.5rem;line-height:1.6;text-align:center;font-size:1.05rem}
.section{background:#161b22;border:1px solid #30363d;border-radius:8px;padding:1.5rem;margin-bottom:1.5rem;text-align:left;line-height:1.7;color:#c9d1d9}
.section h2{color:#e1e4e8;font-size:1.15rem;margin-bottom:.75rem}
.section h3{color:#e1e4e8;font-size:1rem;margin-bottom:.5rem;margin-top:1rem}
.section h3:first-child{margin-top:0}
.section p{margin-bottom:.5rem}
.section ul{margin:.5rem 0 .5rem 1.5rem;color:#8b949e}
.section li{margin-bottom:.35rem}
code{background:#0d1117;padding:.2rem .5rem;border-radius:4px;font-size:.85em;color:#58a6ff;word-break:break-all}
pre{background:#0d1117;border:1px solid #30363d;border-radius:6px;padding:1rem;overflow-x:auto;margin:.75rem 0;line-height:1.5}
pre code{padding:0;background:none;font-size:.85em}
.label-tag{display:inline-block;padding:.15rem .5rem;border-radius:4px;font-size:.75rem;font-weight:600;text-transform:uppercase;letter-spacing:.03em;margin-right:.5rem;vertical-align:middle}
.tag-tcp{background:rgba(63,185,80,0.15);color:#3fb950}
.tag-udp{background:rgba(210,153,34,0.15);color:#d29922}
.note{background:#1c1e26;border-left:3px solid #d29922;padding:.75rem 1rem;border-radius:0 6px 6px 0;margin:.75rem 0;font-size:.92rem;color:#8b949e}
.note strong{color:#d29922}
.search-section{text-align:center}
.search-section h2{text-align:center}
.search-box{display:flex;gap:.5rem;margin-bottom:1rem}
.search-box input{flex:1;padding:.75rem 1rem;border:1px solid #30363d;border-radius:6px;background:#161b22;color:#e1e4e8;font-size:1rem;outline:none}
.search-box input:focus{border-color:#58a6ff}
.search-box input::placeholder{color:#484f58}
.search-box button{padding:.75rem 1.5rem;background:#238636;color:#fff;border:none;border-radius:6px;font-size:1rem;cursor:pointer;white-space:nowrap}
.search-box button:hover{background:#2ea043}
.auto-link{font-size:.9rem;color:#8b949e}
.auto-link a{color:#58a6ff;text-decoration:none}
.auto-link a:hover{text-decoration:underline}
.footer{margin-top:2rem;color:#484f58;font-size:.8rem;text-align:center}
.footer a{color:#58a6ff;text-decoration:none}
.footer a:hover{text-decoration:underline}
</style>
</head>
<body>
<div class="container">
<h1>btest-rs</h1>
<p class="subtitle">Free public MikroTik-compatible bandwidth test server.<br>Test your link speed from any RouterOS device &mdash; no registration required.</p>
<div class="section">
<h2>Quick Start</h2>
<p>Open a terminal on your MikroTik router and run one of the following commands:</p>
<h3><span class="label-tag tag-tcp">TCP</span> Recommended</h3>
<pre><code>/tool bandwidth-test address=104.225.217.60 user=btest password=btest protocol=tcp direction=both</code></pre>
<h3><span class="label-tag tag-udp">UDP</span></h3>
<pre><code>/tool bandwidth-test address=104.225.217.60 user=btest password=btest protocol=udp direction=both</code></pre>
</div>
<div class="section">
<h2>Important Notes</h2>
<ul>
<li><strong style="color:#e1e4e8">Credentials:</strong> <code>user=btest</code> <code>password=btest</code></li>
<li><strong style="color:#e1e4e8">TCP is recommended</strong> for remote testing &mdash; it works reliably through any NAT or firewall</li>
<li><strong style="color:#e1e4e8">Per-IP daily quotas</strong> apply to keep the service fair for everyone</li>
<li><strong style="color:#e1e4e8">Maximum test duration:</strong> 120 seconds</li>
<li><strong style="color:#e1e4e8">Connection limit:</strong> 3 concurrent tests per IP</li>
</ul>
<div class="note">
<strong>UDP bidirectional may not work through NAT/firewall.</strong>
UDP <code>direction=both</code> requires the server to send packets to a pre-calculated client port, which NAT routers typically block. If you need UDP testing:<br>
&bull; Forward UDP ports 2001&ndash;2100 on your router, or<br>
&bull; Use <code>direction=send</code> or <code>direction=receive</code> (one-way works fine), or<br>
&bull; Test from a device with a public IP
</div>
</div>
<div class="section search-section">
<h2>Check Your Results</h2>
<p style="margin-bottom:1rem;color:#8b949e">After running a test, enter your public IP to view throughput charts, session history, and statistics.</p>
<form class="search-box" id="ip-form" onsubmit="return goToDashboard()">
<input type="text" id="ip-input" placeholder="Enter your IP address (e.g. 203.0.113.5)" autocomplete="off">
<button type="submit">View Results</button>
</form>
<div class="auto-link" id="auto-detect">Detecting your IP...</div>
</div>
<div class="footer">Powered by <a href="https://github.com/manawenuz/btest-rs">btest-rs</a> &mdash; open source MikroTik bandwidth test server</div>
</div>
<script>
function goToDashboard(){var ip=document.getElementById('ip-input').value.trim();if(ip){window.location.href='/dashboard/'+encodeURIComponent(ip);}return false;}
fetch('https://api.ipify.org?format=json')
.then(function(r){return r.json();})
.then(function(d){if(d.ip){document.getElementById('ip-input').value=d.ip;document.getElementById('auto-detect').innerHTML='Detected IP: <a href="/dashboard/'+encodeURIComponent(d.ip)+'">'+d.ip+'</a> &mdash; click to view your dashboard';}})
.catch(function(){document.getElementById('auto-detect').textContent='';});
</script>
</body>
</html>"##,
ext = "html"
)]
struct IndexTemplate;
/// Per-IP dashboard page template.
#[derive(Template)]
#[template(
source = r##"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Dashboard &mdash; {{ ip }} &mdash; btest-rs</title>
<style>
*{margin:0;padding:0;box-sizing:border-box}
body{font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;background:#0f1117;color:#e1e4e8;min-height:100vh;padding:1.5rem}
a{color:#58a6ff;text-decoration:none}a:hover{text-decoration:underline}
.header{display:flex;align-items:center;gap:1rem;margin-bottom:1.5rem;flex-wrap:wrap}
.header h1{font-size:1.5rem;color:#58a6ff}
.header .ip-label{font-size:1.1rem;color:#8b949e;font-family:monospace}
.header .home-link{margin-left:auto}
.btn{display:inline-block;padding:.5rem 1rem;border-radius:6px;font-size:.85rem;font-weight:500;cursor:pointer;border:1px solid #30363d;text-decoration:none}
.btn-json{background:#161b22;color:#3fb950}.btn-json:hover{background:#1c2128;text-decoration:none}
.stats{display:grid;grid-template-columns:repeat(auto-fit,minmax(160px,1fr));gap:1rem;margin-bottom:1.5rem}
.stat-card{background:#161b22;border:1px solid #30363d;border-radius:8px;padding:1rem}
.stat-card .label{color:#8b949e;font-size:.8rem;text-transform:uppercase;letter-spacing:.05em}
.stat-card .value{font-size:1.4rem;font-weight:600;margin-top:.25rem}
.table-wrap{overflow-x:auto;margin-bottom:1.5rem}
table{width:100%;border-collapse:collapse;background:#161b22;border-radius:8px;overflow:hidden}
th,td{padding:.6rem 1rem;text-align:left;border-bottom:1px solid #21262d;white-space:nowrap}
th{background:#0d1117;color:#8b949e;font-size:.8rem;text-transform:uppercase;letter-spacing:.04em}
tr{cursor:pointer}tr:hover td{background:#1c2128}tr.selected td{background:#1f3a5f}
.proto-tcp{color:#3fb950}.proto-udp{color:#d29922}
.dir-tx{color:#f78166}.dir-rx{color:#58a6ff}.dir-both{color:#bc8cff}
.chart-section{background:#161b22;border:1px solid #30363d;border-radius:8px;padding:1.5rem;margin-bottom:1.5rem}
.chart-section h2{font-size:1rem;color:#8b949e;margin-bottom:1rem}
.chart-container{position:relative;width:100%;max-height:360px}
.chart-placeholder{text-align:center;color:#484f58;padding:3rem 0}
.footer{text-align:center;color:#484f58;font-size:.8rem;margin-top:2rem}
.no-data{text-align:center;padding:3rem;color:#484f58}
.quota-section{background:#161b22;border:1px solid #30363d;border-radius:8px;padding:1.25rem;margin-bottom:1.5rem}
.quota-section h2{font-size:1rem;color:#8b949e;margin-bottom:1rem}
.quota-row{display:flex;align-items:center;gap:1rem;margin-bottom:.75rem}
.quota-row:last-child{margin-bottom:0}
.quota-label{min-width:70px;font-size:.85rem;color:#8b949e;text-transform:uppercase;letter-spacing:.04em}
.quota-bar-wrap{flex:1;background:#21262d;border-radius:4px;height:22px;position:relative;overflow:hidden}
.quota-bar{height:100%;border-radius:4px;transition:width .5s ease}
.quota-bar.low{background:#238636}.quota-bar.mid{background:#d29922}.quota-bar.high{background:#da3633}
.quota-text{min-width:180px;font-size:.85rem;color:#e1e4e8;text-align:right;font-family:monospace}
</style>
</head>
<body>
<div class="header">
<h1>btest-rs</h1>
<span class="ip-label">{{ ip }}</span>
<a class="btn btn-json" href="/api/ip/{{ ip }}/export" download>Export JSON</a>
<span class="home-link"><a href="/">Home</a></span>
</div>
<div class="stats" id="stats-grid">
<div class="stat-card"><div class="label">Total Tests</div><div class="value" id="stat-total-tests">&mdash;</div></div>
<div class="stat-card"><div class="label">Total TX</div><div class="value" id="stat-total-tx">&mdash;</div></div>
<div class="stat-card"><div class="label">Total RX</div><div class="value" id="stat-total-rx">&mdash;</div></div>
<div class="stat-card"><div class="label">Avg TX Mbps</div><div class="value" id="stat-avg-tx">&mdash;</div></div>
<div class="stat-card"><div class="label">Avg RX Mbps</div><div class="value" id="stat-avg-rx">&mdash;</div></div>
</div>
<div class="quota-section" id="quota-section">
<h2>Quota Usage</h2>
<div class="quota-row"><span class="quota-label">Daily</span><div class="quota-bar-wrap"><div class="quota-bar low" id="bar-daily" style="width:0%"></div></div><span class="quota-text" id="text-daily">&mdash;</span></div>
<div class="quota-row"><span class="quota-label">Weekly</span><div class="quota-bar-wrap"><div class="quota-bar low" id="bar-weekly" style="width:0%"></div></div><span class="quota-text" id="text-weekly">&mdash;</span></div>
<div class="quota-row"><span class="quota-label">Monthly</span><div class="quota-bar-wrap"><div class="quota-bar low" id="bar-monthly" style="width:0%"></div></div><span class="quota-text" id="text-monthly">&mdash;</span></div>
</div>
<div class="chart-section">
<h2 id="chart-title">Select a test below to view its throughput chart</h2>
<div class="chart-container">
<canvas id="throughput-chart"></canvas>
<div class="chart-placeholder" id="chart-placeholder">Click a row in the table to load the throughput graph for that session.</div>
</div>
</div>
<div class="table-wrap">
<table>
<thead><tr><th>#</th><th>Date</th><th>Protocol</th><th>Direction</th><th>TX Bytes</th><th>RX Bytes</th><th>Duration</th><th>Avg TX Mbps</th><th>Avg RX Mbps</th></tr></thead>
<tbody id="sessions-body"><tr><td colspan="9" class="no-data">Loading sessions...</td></tr></tbody>
</table>
</div>
<div class="footer">Powered by btest-rs</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
var currentIp="{{ ip }}";
var throughputChart=null;
function formatBytes(b){if(b===0)return'0 B';var u=['B','KB','MB','GB','TB'];var i=Math.floor(Math.log(b)/Math.log(1024));if(i>=u.length)i=u.length-1;return(b/Math.pow(1024,i)).toFixed(1)+' '+u[i];}
function formatMbps(bps){return(bps*8/1e6).toFixed(2);}
fetch('/api/ip/'+encodeURIComponent(currentIp)+'/quota').then(function(r){return r.json();}).then(function(q){
function upd(id,used,limit){
var pct=limit>0?Math.min(used/limit*100,100):0;
var bar=document.getElementById('bar-'+id);
var txt=document.getElementById('text-'+id);
bar.style.width=pct.toFixed(1)+'%';
bar.className='quota-bar '+(pct<50?'low':pct<80?'mid':'high');
txt.textContent=formatBytes(used)+' / '+formatBytes(limit)+' ('+pct.toFixed(1)+'%)';
}
upd('daily',q.daily_used,q.daily_limit);
upd('weekly',q.weekly_used,q.weekly_limit);
upd('monthly',q.monthly_used,q.monthly_limit);
}).catch(function(){});
function durationStr(s,e){if(!s||!e)return'--';var ms=new Date(e)-new Date(s);if(ms<0)return'--';var sec=Math.round(ms/1000);if(sec<60)return sec+'s';return Math.floor(sec/60)+'m '+(sec%60)+'s';}
function durationSec(s,e){if(!s||!e)return 0;return Math.max((new Date(e)-new Date(s))/1000,0.001);}
fetch('/api/ip/'+encodeURIComponent(currentIp)+'/stats').then(function(r){return r.json();}).then(function(d){
document.getElementById('stat-total-tests').textContent=d.total_sessions||0;
document.getElementById('stat-total-tx').textContent=formatBytes(d.total_tx_bytes||0);
document.getElementById('stat-total-rx').textContent=formatBytes(d.total_rx_bytes||0);
document.getElementById('stat-avg-tx').textContent=d.avg_tx_mbps?d.avg_tx_mbps.toFixed(2):'0.00';
document.getElementById('stat-avg-rx').textContent=d.avg_rx_mbps?d.avg_rx_mbps.toFixed(2):'0.00';
}).catch(function(){});
fetch('/api/ip/'+encodeURIComponent(currentIp)+'/sessions').then(function(r){return r.json();}).then(function(sessions){
var tbody=document.getElementById('sessions-body');
if(!sessions||sessions.length===0){tbody.innerHTML='<tr><td colspan="9" class="no-data">No test sessions found for this IP.</td></tr>';return;}
tbody.innerHTML='';
sessions.forEach(function(s,i){
var tr=document.createElement('tr');tr.dataset.sessionId=s.id;tr.onclick=function(){selectSession(s.id,tr);};
var dur=durationSec(s.started_at,s.ended_at);var avgTx=dur>0?formatMbps(s.tx_bytes/dur):'0.00';var avgRx=dur>0?formatMbps(s.rx_bytes/dur):'0.00';
var proto=(s.protocol||'TCP').toUpperCase();var dir=(s.direction||'BOTH').toUpperCase();
var pc=proto==='UDP'?'proto-udp':'proto-tcp';var dc=dir==='TX'?'dir-tx':dir==='RX'?'dir-rx':'dir-both';
tr.innerHTML='<td>'+(i+1)+'</td><td>'+(s.started_at||'--')+'</td><td class="'+pc+'">'+proto+'</td><td class="'+dc+'">'+dir+'</td><td>'+formatBytes(s.tx_bytes||0)+'</td><td>'+formatBytes(s.rx_bytes||0)+'</td><td>'+durationStr(s.started_at,s.ended_at)+'</td><td>'+avgTx+'</td><td>'+avgRx+'</td>';
tbody.appendChild(tr);
});
if(sessions.length>0){var fr=tbody.querySelector('tr');if(fr)selectSession(sessions[0].id,fr);}
}).catch(function(){document.getElementById('sessions-body').innerHTML='<tr><td colspan="9" class="no-data">Failed to load sessions.</td></tr>';});
function selectSession(sid,row){
document.querySelectorAll('#sessions-body tr').forEach(function(r){r.classList.remove('selected');});
row.classList.add('selected');
document.getElementById('chart-title').textContent='Throughput for session #'+sid;
document.getElementById('chart-placeholder').style.display='none';
fetch('/api/session/'+sid+'/intervals').then(function(r){return r.json();}).then(function(iv){renderChart(iv);}).catch(function(){
document.getElementById('chart-placeholder').style.display='block';
document.getElementById('chart-placeholder').textContent='Failed to load interval data.';
});
}
function renderChart(iv){
var canvas=document.getElementById('throughput-chart');
if(throughputChart)throughputChart.destroy();
if(!iv||iv.length===0){document.getElementById('chart-placeholder').style.display='block';document.getElementById('chart-placeholder').textContent='No interval data available for this session.';return;}
var labels=iv.map(function(d){return d.second+'s';});
var tx=iv.map(function(d){return(d.tx_bytes*8/1e6).toFixed(2);});
var rx=iv.map(function(d){return(d.rx_bytes*8/1e6).toFixed(2);});
throughputChart=new Chart(canvas,{type:'line',data:{labels:labels,datasets:[
{label:'TX Mbps',data:tx,borderColor:'#f78166',backgroundColor:'rgba(247,129,102,0.1)',borderWidth:2,fill:true,tension:0.3,pointRadius:1},
{label:'RX Mbps',data:rx,borderColor:'#58a6ff',backgroundColor:'rgba(88,166,255,0.1)',borderWidth:2,fill:true,tension:0.3,pointRadius:1}
]},options:{responsive:true,maintainAspectRatio:false,interaction:{intersect:false,mode:'index'},
scales:{x:{title:{display:true,text:'Time',color:'#8b949e'},ticks:{color:'#8b949e'},grid:{color:'#21262d'}},
y:{title:{display:true,text:'Mbps',color:'#8b949e'},ticks:{color:'#8b949e'},grid:{color:'#21262d'},beginAtZero:true}},
plugins:{legend:{labels:{color:'#e1e4e8'}},tooltip:{backgroundColor:'#161b22',borderColor:'#30363d',borderWidth:1,titleColor:'#e1e4e8',bodyColor:'#8b949e'}}}});
}
</script>
</body>
</html>"##,
ext = "html"
)]
struct DashboardTemplate {
ip: String,
}
// ---------------------------------------------------------------------------
// JSON response types
// ---------------------------------------------------------------------------
/// A single test session as returned by the sessions API.
#[derive(Serialize)]
struct SessionJson {
id: i64,
username: String,
peer_ip: String,
started_at: Option<String>,
ended_at: Option<String>,
tx_bytes: i64,
rx_bytes: i64,
protocol: Option<String>,
direction: Option<String>,
}
/// Aggregate statistics for an IP address.
#[derive(Serialize)]
struct StatsJson {
total_sessions: i64,
total_tx_bytes: i64,
total_rx_bytes: i64,
avg_tx_mbps: f64,
avg_rx_mbps: f64,
}
/// One second of throughput data within a session.
#[derive(Serialize)]
struct IntervalJson {
second: i64,
tx_bytes: i64,
rx_bytes: i64,
}
// ---------------------------------------------------------------------------
// Error helper
// ---------------------------------------------------------------------------
/// Uniform error wrapper so handlers can use `?` freely.
///
/// All errors are rendered as `500 Internal Server Error` with a plain-text
/// body. The full error chain is logged via [`tracing`].
struct AppError(anyhow::Error);
impl IntoResponse for AppError {
fn into_response(self) -> Response {
tracing::error!("web handler error: {:#}", self.0);
(StatusCode::INTERNAL_SERVER_ERROR, self.0.to_string()).into_response()
}
}
impl<E: Into<anyhow::Error>> From<E> for AppError {
fn from(err: E) -> Self {
Self(err.into())
}
}
// ---------------------------------------------------------------------------
// Handlers
// ---------------------------------------------------------------------------
/// `GET /` -- render the landing page.
async fn index_page() -> Result<Html<String>, AppError> {
let rendered = IndexTemplate
.render()
.map_err(|e| anyhow::anyhow!("template render: {}", e))?;
Ok(Html(rendered))
}
/// `GET /dashboard/{ip}` -- render the per-IP dashboard.
async fn dashboard_page(Path(ip): Path<String>) -> Result<Html<String>, AppError> {
let rendered = DashboardTemplate { ip }
.render()
.map_err(|e| anyhow::anyhow!("template render: {}", e))?;
Ok(Html(rendered))
}
/// `GET /api/ip/{ip}/sessions` -- return the most recent 100 sessions for
/// the given peer IP as a JSON array.
async fn api_sessions(
State(state): State<Arc<WebState>>,
Path(ip): Path<String>,
) -> Result<axum::Json<Vec<SessionJson>>, AppError> {
let sessions = {
let conn = state
.query_conn
.lock()
.map_err(|e| anyhow::anyhow!("lock: {}", e))?;
let mut stmt = conn.prepare(
"SELECT id, username, peer_ip, started_at, ended_at,
tx_bytes, rx_bytes, protocol, direction
FROM sessions
WHERE peer_ip = ?1
ORDER BY started_at DESC
LIMIT 100",
)?;
let rows = stmt.query_map(params![ip], |row| {
Ok(SessionJson {
id: row.get(0)?,
username: row.get(1)?,
peer_ip: row.get(2)?,
started_at: row.get(3)?,
ended_at: row.get(4)?,
tx_bytes: row.get(5)?,
rx_bytes: row.get(6)?,
protocol: row.get(7)?,
direction: row.get(8)?,
})
})?;
rows.filter_map(Result::ok).collect::<Vec<_>>()
};
Ok(axum::Json(sessions))
}
/// `GET /api/ip/{ip}/stats` -- return aggregate statistics (total bytes,
/// session count, average throughput) for the given IP.
async fn api_stats(
State(state): State<Arc<WebState>>,
Path(ip): Path<String>,
) -> Result<axum::Json<StatsJson>, AppError> {
let stats = {
let conn = state
.query_conn
.lock()
.map_err(|e| anyhow::anyhow!("lock: {}", e))?;
conn.query_row(
"SELECT
COUNT(*) AS total_sessions,
COALESCE(SUM(tx_bytes), 0) AS total_tx,
COALESCE(SUM(rx_bytes), 0) AS total_rx,
COALESCE(SUM(
CASE WHEN ended_at IS NOT NULL AND started_at IS NOT NULL
THEN (julianday(ended_at) - julianday(started_at)) * 86400.0
ELSE 0 END
), 0) AS total_seconds
FROM sessions
WHERE peer_ip = ?1",
params![ip],
|row| {
let total_sessions: i64 = row.get(0)?;
let total_tx: i64 = row.get(1)?;
let total_rx: i64 = row.get(2)?;
let total_seconds: f64 = row.get(3)?;
let avg_tx_mbps = if total_seconds > 0.0 {
(total_tx as f64) * 8.0 / total_seconds / 1_000_000.0
} else {
0.0
};
let avg_rx_mbps = if total_seconds > 0.0 {
(total_rx as f64) * 8.0 / total_seconds / 1_000_000.0
} else {
0.0
};
Ok(StatsJson {
total_sessions,
total_tx_bytes: total_tx,
total_rx_bytes: total_rx,
avg_tx_mbps,
avg_rx_mbps,
})
},
)?
};
Ok(axum::Json(stats))
}
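The average-throughput arithmetic this handler applies to the SQL aggregates (bytes times 8 bits, divided by elapsed seconds, scaled to megabits) can be isolated as a small helper. This is an illustrative sketch mirroring the computation above, not part of the server:

```rust
/// Average throughput in Mbps, mirroring the handler's arithmetic:
/// total bytes * 8 bits, divided by elapsed seconds, scaled to megabits.
fn avg_mbps(total_bytes: i64, total_seconds: f64) -> f64 {
    if total_seconds > 0.0 {
        total_bytes as f64 * 8.0 / total_seconds / 1_000_000.0
    } else {
        0.0 // no completed sessions: report zero rather than dividing by zero
    }
}
```

For example, 12,500,000 bytes transferred in one second is exactly 100 Mbps.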
/// Quota usage for an IP — daily/weekly/monthly with limits.
#[derive(Serialize)]
struct QuotaUsageJson {
daily_used: i64,
daily_limit: i64,
weekly_used: i64,
weekly_limit: i64,
monthly_used: i64,
monthly_limit: i64,
}
/// `GET /api/ip/{ip}/quota` -- return current quota usage for the IP.
async fn api_quota(
State(state): State<Arc<WebState>>,
Path(ip): Path<String>,
) -> Result<axum::Json<QuotaUsageJson>, AppError> {
let conn = state.query_conn.lock().map_err(|e| anyhow::anyhow!("lock: {}", e))?;
let daily: i64 = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes + outbound_bytes), 0) FROM ip_usage WHERE ip = ?1 AND date = date('now')",
params![ip], |row| row.get(0),
).unwrap_or(0);
let weekly: i64 = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes + outbound_bytes), 0) FROM ip_usage WHERE ip = ?1 AND date >= date('now', '-7 days')",
params![ip], |row| row.get(0),
).unwrap_or(0);
let monthly: i64 = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes + outbound_bytes), 0) FROM ip_usage WHERE ip = ?1 AND date >= date('now', '-30 days')",
params![ip], |row| row.get(0),
).unwrap_or(0);
// Limits (binary units): 2 GiB daily, 8 GiB weekly, 24 GiB monthly
Ok(axum::Json(QuotaUsageJson {
daily_used: daily,
daily_limit: 2_147_483_648,
weekly_used: weekly,
weekly_limit: 8_589_934_592,
monthly_used: monthly,
monthly_limit: 25_769_803_776,
}))
}
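The hard-coded limit literals are binary multiples of a gibibyte. Written as named constants (a readability sketch, not how the server currently defines them), the values check out:

```rust
// 1 GiB in bytes; the quota limits above are GiB multiples.
const GIB: i64 = 1 << 30; // 1_073_741_824
const DAILY_LIMIT: i64 = 2 * GIB;    // 2_147_483_648
const WEEKLY_LIMIT: i64 = 8 * GIB;   // 8_589_934_592
const MONTHLY_LIMIT: i64 = 24 * GIB; // 25_769_803_776
```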
/// Full export of all data for an IP — stats + sessions with human-readable fields.
#[derive(Serialize)]
struct ExportJson {
ip: String,
exported_at: String,
stats: StatsJson,
quota: QuotaJson,
sessions: Vec<ExportSessionJson>,
}
#[derive(Serialize)]
struct QuotaJson {
daily_used_bytes: i64,
daily_used_human: String,
daily_limit_bytes: String, // human-readable note, not a numeric byte count (limit lives in server config)
}
#[derive(Serialize)]
struct ExportSessionJson {
id: i64,
started_at: Option<String>,
ended_at: Option<String>,
protocol: Option<String>,
direction: Option<String>,
tx_bytes: i64,
rx_bytes: i64,
tx_human: String,
rx_human: String,
duration_secs: f64,
avg_tx_mbps: f64,
avg_rx_mbps: f64,
}
fn human_bytes(b: i64) -> String {
let b = b as f64;
if b >= 1_073_741_824.0 {
format!("{:.2} GB", b / 1_073_741_824.0)
} else if b >= 1_048_576.0 {
format!("{:.1} MB", b / 1_048_576.0)
} else if b >= 1024.0 {
format!("{:.1} KB", b / 1024.0)
} else {
format!("{} B", b as i64)
}
}
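A few spot checks of the unit boundaries, using a standalone copy of `human_bytes` for illustration: values below 1024 stay in bytes, and each threshold switches both the divisor and the precision.

```rust
// Standalone copy of human_bytes above, for a quick check of the boundaries.
fn human_bytes(b: i64) -> String {
    let b = b as f64;
    if b >= 1_073_741_824.0 {
        format!("{:.2} GB", b / 1_073_741_824.0)
    } else if b >= 1_048_576.0 {
        format!("{:.1} MB", b / 1_048_576.0)
    } else if b >= 1024.0 {
        format!("{:.1} KB", b / 1024.0)
    } else {
        format!("{} B", b as i64)
    }
}
```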
/// `GET /api/ip/{ip}/export` -- return a comprehensive JSON export of all
/// sessions, stats, and quota usage for an IP. Suitable for download/archival.
async fn api_export(
State(state): State<Arc<WebState>>,
Path(ip): Path<String>,
) -> Result<impl IntoResponse, AppError> {
let conn = state
.query_conn
.lock()
.map_err(|e| anyhow::anyhow!("lock: {}", e))?;
// Stats
let stats = conn.query_row(
"SELECT COUNT(*), COALESCE(SUM(tx_bytes),0), COALESCE(SUM(rx_bytes),0),
COALESCE(SUM(CASE WHEN ended_at IS NOT NULL AND started_at IS NOT NULL
THEN (julianday(ended_at)-julianday(started_at))*86400.0 ELSE 0 END),0)
FROM sessions WHERE peer_ip = ?1",
params![ip],
|row| {
let n: i64 = row.get(0)?;
let tx: i64 = row.get(1)?;
let rx: i64 = row.get(2)?;
let secs: f64 = row.get(3)?;
Ok(StatsJson {
total_sessions: n,
total_tx_bytes: tx,
total_rx_bytes: rx,
avg_tx_mbps: if secs > 0.0 { tx as f64 * 8.0 / secs / 1e6 } else { 0.0 },
avg_rx_mbps: if secs > 0.0 { rx as f64 * 8.0 / secs / 1e6 } else { 0.0 },
})
},
)?;
// Quota
let daily_used: i64 = conn.query_row(
"SELECT COALESCE(SUM(inbound_bytes + outbound_bytes), 0) FROM ip_usage
WHERE ip = ?1 AND date = date('now')",
params![ip],
|row| row.get(0),
).unwrap_or(0);
let quota = QuotaJson {
daily_used_bytes: daily_used,
daily_used_human: human_bytes(daily_used),
daily_limit_bytes: "see server config".to_string(),
};
// Sessions with computed fields (duration computed by SQLite)
let mut stmt = conn.prepare(
"SELECT id, started_at, ended_at, protocol, direction, tx_bytes, rx_bytes,
CASE WHEN ended_at IS NOT NULL AND started_at IS NOT NULL
THEN (julianday(ended_at) - julianday(started_at)) * 86400.0
ELSE 0 END AS dur_secs
FROM sessions WHERE peer_ip = ?1 ORDER BY started_at DESC LIMIT 100",
)?;
let sessions: Vec<ExportSessionJson> = stmt.query_map(params![ip], |row| {
let tx: i64 = row.get(5)?;
let rx: i64 = row.get(6)?;
let dur: f64 = row.get(7)?;
Ok(ExportSessionJson {
id: row.get(0)?,
started_at: row.get(1)?,
ended_at: row.get(2)?,
protocol: row.get(3)?,
direction: row.get(4)?,
tx_bytes: tx,
rx_bytes: rx,
tx_human: human_bytes(tx),
rx_human: human_bytes(rx),
duration_secs: dur,
avg_tx_mbps: if dur > 0.0 { tx as f64 * 8.0 / dur / 1e6 } else { 0.0 },
avg_rx_mbps: if dur > 0.0 { rx as f64 * 8.0 / dur / 1e6 } else { 0.0 },
})
})?.filter_map(Result::ok).collect();
let export = ExportJson {
ip: ip.clone(),
exported_at: {
// Simple UTC timestamp without chrono
use std::time::{SystemTime, UNIX_EPOCH};
let secs = SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or_default().as_secs();
format!("{}", secs) // Unix timestamp — universally parseable
},
stats,
quota,
sessions,
};
let json_string = serde_json::to_string_pretty(&export)
.map_err(|e| anyhow::anyhow!("json serialize: {}", e))?;
Ok((
StatusCode::OK,
[
(axum::http::header::CONTENT_TYPE, "application/json".to_string()),
(axum::http::header::CONTENT_DISPOSITION,
format!("attachment; filename=\"btest-{}.json\"", ip)),
],
json_string,
))
}
/// `GET /api/session/{id}/intervals` -- return per-second throughput data
/// for a session.
///
/// If the `session_intervals` table does not exist or contains no rows for
/// the requested session, an empty JSON array is returned.
async fn api_intervals(
State(state): State<Arc<WebState>>,
Path(id): Path<i64>,
) -> Result<axum::Json<Vec<IntervalJson>>, AppError> {
let intervals = {
let conn = state
.query_conn
.lock()
.map_err(|e| anyhow::anyhow!("lock: {}", e))?;
// Guard against the table not existing (e.g. first run before
// `ensure_web_tables` was ever called on this database file).
let table_exists: bool = conn
.query_row(
"SELECT COUNT(*) FROM sqlite_master \
WHERE type = 'table' AND name = 'session_intervals'",
[],
|row| row.get::<_, i64>(0),
)
.map(|c| c > 0)
.unwrap_or(false);
if !table_exists {
Vec::new()
} else {
let mut stmt = conn.prepare(
"SELECT second, tx_bytes, rx_bytes
FROM session_intervals
WHERE session_id = ?1
ORDER BY second ASC",
)?;
let rows = stmt.query_map(params![id], |row| {
Ok(IntervalJson {
second: row.get(0)?,
tx_bytes: row.get(1)?,
rx_bytes: row.get(2)?,
})
})?;
rows.filter_map(Result::ok).collect::<Vec<_>>()
}
};
Ok(axum::Json(intervals))
}


@@ -0,0 +1,387 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Dashboard &mdash; {{ ip }} &mdash; btest-rs</title>
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
background: #0f1117;
color: #e1e4e8;
min-height: 100vh;
padding: 1.5rem;
}
a { color: #58a6ff; text-decoration: none; }
a:hover { text-decoration: underline; }
.header {
display: flex;
align-items: center;
gap: 1rem;
margin-bottom: 1.5rem;
flex-wrap: wrap;
}
.header h1 { font-size: 1.5rem; color: #58a6ff; }
.header .ip-label {
font-size: 1.1rem;
color: #8b949e;
font-family: monospace;
}
.header .home-link { margin-left: auto; }
/* Stats cards */
.stats {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(160px, 1fr));
gap: 1rem;
margin-bottom: 1.5rem;
}
.stat-card {
background: #161b22;
border: 1px solid #30363d;
border-radius: 8px;
padding: 1rem;
}
.stat-card .label {
color: #8b949e;
font-size: 0.8rem;
text-transform: uppercase;
letter-spacing: 0.05em;
}
.stat-card .value {
font-size: 1.4rem;
font-weight: 600;
margin-top: 0.25rem;
}
/* Table */
.table-wrap {
overflow-x: auto;
margin-bottom: 1.5rem;
}
table {
width: 100%;
border-collapse: collapse;
background: #161b22;
border-radius: 8px;
overflow: hidden;
}
th, td {
padding: 0.6rem 1rem;
text-align: left;
border-bottom: 1px solid #21262d;
white-space: nowrap;
}
th {
background: #0d1117;
color: #8b949e;
font-size: 0.8rem;
text-transform: uppercase;
letter-spacing: 0.04em;
}
tr { cursor: pointer; }
tr:hover td { background: #1c2128; }
tr.selected td { background: #1f3a5f; }
.proto-tcp { color: #3fb950; }
.proto-udp { color: #d29922; }
.dir-tx { color: #f78166; }
.dir-rx { color: #58a6ff; }
.dir-both { color: #bc8cff; }
/* Chart area */
.chart-section {
background: #161b22;
border: 1px solid #30363d;
border-radius: 8px;
padding: 1.5rem;
margin-bottom: 1.5rem;
}
.chart-section h2 {
font-size: 1rem;
color: #8b949e;
margin-bottom: 1rem;
}
.chart-container {
position: relative;
width: 100%;
max-height: 360px;
}
.chart-placeholder {
text-align: center;
color: #484f58;
padding: 3rem 0;
}
.footer {
text-align: center;
color: #484f58;
font-size: 0.8rem;
margin-top: 2rem;
}
.no-data {
text-align: center;
padding: 3rem;
color: #484f58;
}
</style>
</head>
<body>
<div class="header">
<h1>btest-rs</h1>
<span class="ip-label">{{ ip }}</span>
<span class="home-link"><a href="/">Home</a></span>
</div>
<!-- Stats summary (filled via API) -->
<div class="stats" id="stats-grid">
<div class="stat-card">
<div class="label">Total Tests</div>
<div class="value" id="stat-total-tests">&mdash;</div>
</div>
<div class="stat-card">
<div class="label">Total TX</div>
<div class="value" id="stat-total-tx">&mdash;</div>
</div>
<div class="stat-card">
<div class="label">Total RX</div>
<div class="value" id="stat-total-rx">&mdash;</div>
</div>
<div class="stat-card">
<div class="label">Avg TX Mbps</div>
<div class="value" id="stat-avg-tx">&mdash;</div>
</div>
<div class="stat-card">
<div class="label">Avg RX Mbps</div>
<div class="value" id="stat-avg-rx">&mdash;</div>
</div>
</div>
<!-- Chart for selected session -->
<div class="chart-section">
<h2 id="chart-title">Select a test below to view its throughput chart</h2>
<div class="chart-container">
<canvas id="throughput-chart"></canvas>
<div class="chart-placeholder" id="chart-placeholder">Click a row in the table to load the throughput graph for that session.</div>
</div>
</div>
<!-- Sessions table -->
<div class="table-wrap">
<table>
<thead>
<tr>
<th>#</th>
<th>Date</th>
<th>Protocol</th>
<th>Direction</th>
<th>TX Bytes</th>
<th>RX Bytes</th>
<th>Duration</th>
<th>Avg TX Mbps</th>
<th>Avg RX Mbps</th>
</tr>
</thead>
<tbody id="sessions-body">
<tr><td colspan="9" class="no-data">Loading sessions...</td></tr>
</tbody>
</table>
</div>
<div class="footer">Powered by btest-rs</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
var currentIp = "{{ ip }}";
var throughputChart = null;
function formatBytes(b) {
if (b === 0) return '0 B';
var units = ['B', 'KB', 'MB', 'GB', 'TB'];
var i = Math.floor(Math.log(b) / Math.log(1024));
if (i >= units.length) i = units.length - 1;
return (b / Math.pow(1024, i)).toFixed(1) + ' ' + units[i];
}
function formatMbps(bytesPerSec) {
return (bytesPerSec * 8 / 1e6).toFixed(2);
}
function durationStr(startedAt, endedAt) {
if (!startedAt || !endedAt) return '--';
var ms = new Date(endedAt) - new Date(startedAt);
if (ms < 0) return '--';
var s = Math.round(ms / 1000);
if (s < 60) return s + 's';
return Math.floor(s / 60) + 'm ' + (s % 60) + 's';
}
function durationSec(startedAt, endedAt) {
if (!startedAt || !endedAt) return 0;
var ms = new Date(endedAt) - new Date(startedAt);
return Math.max(ms / 1000, 0.001);
}
// Load summary stats
fetch('/api/ip/' + encodeURIComponent(currentIp) + '/stats')
.then(function(r) { return r.json(); })
.then(function(data) {
document.getElementById('stat-total-tests').textContent = data.total_sessions || 0;
document.getElementById('stat-total-tx').textContent = formatBytes(data.total_tx_bytes || 0);
document.getElementById('stat-total-rx').textContent = formatBytes(data.total_rx_bytes || 0);
document.getElementById('stat-avg-tx').textContent = data.avg_tx_mbps ? data.avg_tx_mbps.toFixed(2) : '0.00';
document.getElementById('stat-avg-rx').textContent = data.avg_rx_mbps ? data.avg_rx_mbps.toFixed(2) : '0.00';
})
.catch(function() {});
// Load sessions list
fetch('/api/ip/' + encodeURIComponent(currentIp) + '/sessions')
.then(function(r) { return r.json(); })
.then(function(sessions) {
var tbody = document.getElementById('sessions-body');
if (!sessions || sessions.length === 0) {
tbody.innerHTML = '<tr><td colspan="9" class="no-data">No test sessions found for this IP.</td></tr>';
return;
}
tbody.innerHTML = '';
sessions.forEach(function(s, i) {
var tr = document.createElement('tr');
tr.dataset.sessionId = s.id;
tr.onclick = function() { selectSession(s.id, tr); };
var dur = durationSec(s.started_at, s.ended_at);
var avgTx = dur > 0 ? formatMbps(s.tx_bytes / dur) : '0.00';
var avgRx = dur > 0 ? formatMbps(s.rx_bytes / dur) : '0.00';
var proto = (s.protocol || 'TCP').toUpperCase();
var dir = (s.direction || 'BOTH').toUpperCase();
var protoClass = proto === 'UDP' ? 'proto-udp' : 'proto-tcp';
var dirClass = dir === 'TX' ? 'dir-tx' : dir === 'RX' ? 'dir-rx' : 'dir-both';
tr.innerHTML =
'<td>' + (i + 1) + '</td>' +
'<td>' + (s.started_at || '--') + '</td>' +
'<td class="' + protoClass + '">' + proto + '</td>' +
'<td class="' + dirClass + '">' + dir + '</td>' +
'<td>' + formatBytes(s.tx_bytes || 0) + '</td>' +
'<td>' + formatBytes(s.rx_bytes || 0) + '</td>' +
'<td>' + durationStr(s.started_at, s.ended_at) + '</td>' +
'<td>' + avgTx + '</td>' +
'<td>' + avgRx + '</td>';
tbody.appendChild(tr);
});
// Auto-select the first (most recent) session
if (sessions.length > 0) {
var firstRow = tbody.querySelector('tr');
if (firstRow) selectSession(sessions[0].id, firstRow);
}
})
.catch(function() {
document.getElementById('sessions-body').innerHTML =
'<tr><td colspan="9" class="no-data">Failed to load sessions.</td></tr>';
});
function selectSession(sessionId, rowEl) {
// Highlight selected row
var rows = document.querySelectorAll('#sessions-body tr');
rows.forEach(function(r) { r.classList.remove('selected'); });
rowEl.classList.add('selected');
document.getElementById('chart-title').textContent = 'Throughput for session #' + sessionId;
document.getElementById('chart-placeholder').style.display = 'none';
fetch('/api/session/' + sessionId + '/intervals')
.then(function(r) { return r.json(); })
.then(function(intervals) {
renderChart(intervals);
})
.catch(function() {
document.getElementById('chart-placeholder').style.display = 'block';
document.getElementById('chart-placeholder').textContent = 'Failed to load interval data.';
});
}
function renderChart(intervals) {
var canvas = document.getElementById('throughput-chart');
if (throughputChart) {
throughputChart.destroy();
}
if (!intervals || intervals.length === 0) {
document.getElementById('chart-placeholder').style.display = 'block';
document.getElementById('chart-placeholder').textContent = 'No interval data available for this session.';
return;
}
var labels = intervals.map(function(d) { return d.second + 's'; });
var txData = intervals.map(function(d) { return (d.tx_bytes * 8 / 1e6).toFixed(2); });
var rxData = intervals.map(function(d) { return (d.rx_bytes * 8 / 1e6).toFixed(2); });
throughputChart = new Chart(canvas, {
type: 'line',
data: {
labels: labels,
datasets: [
{
label: 'TX Mbps',
data: txData,
borderColor: '#f78166',
backgroundColor: 'rgba(247, 129, 102, 0.1)',
borderWidth: 2,
fill: true,
tension: 0.3,
pointRadius: 1
},
{
label: 'RX Mbps',
data: rxData,
borderColor: '#58a6ff',
backgroundColor: 'rgba(88, 166, 255, 0.1)',
borderWidth: 2,
fill: true,
tension: 0.3,
pointRadius: 1
}
]
},
options: {
responsive: true,
maintainAspectRatio: false,
interaction: {
intersect: false,
mode: 'index'
},
scales: {
x: {
title: { display: true, text: 'Time', color: '#8b949e' },
ticks: { color: '#8b949e' },
grid: { color: '#21262d' }
},
y: {
title: { display: true, text: 'Mbps', color: '#8b949e' },
ticks: { color: '#8b949e' },
grid: { color: '#21262d' },
beginAtZero: true
}
},
plugins: {
legend: {
labels: { color: '#e1e4e8' }
},
tooltip: {
backgroundColor: '#161b22',
borderColor: '#30363d',
borderWidth: 1,
titleColor: '#e1e4e8',
bodyColor: '#8b949e'
}
}
}
});
}
</script>
</body>
</html>


@@ -0,0 +1,160 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>btest-rs Public Bandwidth Test Server</title>
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
background: #0f1117;
color: #e1e4e8;
min-height: 100vh;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
}
.container {
max-width: 560px;
width: 90%;
text-align: center;
padding: 2rem;
}
h1 {
font-size: 2rem;
margin-bottom: 0.5rem;
color: #58a6ff;
}
.subtitle {
color: #8b949e;
margin-bottom: 2rem;
line-height: 1.5;
}
.search-box {
display: flex;
gap: 0.5rem;
margin-bottom: 1.5rem;
}
.search-box input {
flex: 1;
padding: 0.75rem 1rem;
border: 1px solid #30363d;
border-radius: 6px;
background: #161b22;
color: #e1e4e8;
font-size: 1rem;
outline: none;
}
.search-box input:focus {
border-color: #58a6ff;
}
.search-box input::placeholder {
color: #484f58;
}
.search-box button {
padding: 0.75rem 1.5rem;
background: #238636;
color: #fff;
border: none;
border-radius: 6px;
font-size: 1rem;
cursor: pointer;
white-space: nowrap;
}
.search-box button:hover {
background: #2ea043;
}
.info {
background: #161b22;
border: 1px solid #30363d;
border-radius: 8px;
padding: 1.5rem;
text-align: left;
line-height: 1.6;
color: #8b949e;
}
.info h3 {
color: #e1e4e8;
margin-bottom: 0.5rem;
}
.info code {
background: #0d1117;
padding: 0.15rem 0.4rem;
border-radius: 4px;
font-size: 0.9em;
color: #58a6ff;
}
.auto-link {
margin-top: 1rem;
font-size: 0.9rem;
}
.auto-link a {
color: #58a6ff;
text-decoration: none;
}
.auto-link a:hover {
text-decoration: underline;
}
.footer {
margin-top: 2rem;
color: #484f58;
font-size: 0.8rem;
}
</style>
</head>
<body>
<div class="container">
<h1>btest-rs</h1>
<p class="subtitle">Public MikroTik Bandwidth Test Server &mdash; view your test results and history.</p>
<form class="search-box" id="ip-form" onsubmit="return goToDashboard()">
<input type="text" id="ip-input" placeholder="Enter your IP address (e.g. 203.0.113.5)" autocomplete="off">
<button type="submit">View Results</button>
</form>
<div class="auto-link" id="auto-detect">
Detecting your IP...
</div>
<div class="info">
<h3>How it works</h3>
<p>
Run a bandwidth test from your MikroTik router targeting this server.
After the test completes, enter your public IP above to see
throughput charts, session history, and aggregate statistics.
</p>
<p style="margin-top: 0.5rem;">
Example: <code>/tool bandwidth-test address=this-server protocol=tcp direction=both</code>
</p>
</div>
<div class="footer">Powered by btest-rs</div>
</div>
<script>
function goToDashboard() {
var ip = document.getElementById('ip-input').value.trim();
if (ip) {
window.location.href = '/dashboard/' + encodeURIComponent(ip);
}
return false;
}
// Auto-detect visitor IP and offer a direct link
fetch('https://api.ipify.org?format=json')
.then(function(r) { return r.json(); })
.then(function(data) {
if (data.ip) {
document.getElementById('ip-input').value = data.ip;
document.getElementById('auto-detect').innerHTML =
'Detected IP: <a href="/dashboard/' + encodeURIComponent(data.ip) + '">' + data.ip + '</a> &mdash; click to view your dashboard';
}
})
.catch(function() {
document.getElementById('auto-detect').textContent = '';
});
</script>
</body>
</html>

154 src/syslog_logger.rs Normal file

@@ -0,0 +1,154 @@
//! Syslog integration for btest-rs server mode.
//!
//! Sends structured log events to a remote syslog server via UDP, using the
//! BSD syslog message format (RFC 3164).
//! Events: auth success/failure, test start/stop, speed results.
use std::net::UdpSocket;
use std::sync::Mutex;
static SYSLOG: Mutex<Option<SyslogSender>> = Mutex::new(None);
struct SyslogSender {
socket: UdpSocket,
target: String,
hostname: String,
}
/// Initialize the global syslog sender.
/// `target` is the syslog server address, e.g. "192.168.1.1:514".
pub fn init(target: &str) -> std::io::Result<()> {
let socket = UdpSocket::bind("0.0.0.0:0")?;
let hostname = hostname::get()
.map(|h| h.to_string_lossy().to_string())
.unwrap_or_else(|_| "btest-rs".to_string());
let sender = SyslogSender {
socket,
target: target.to_string(),
hostname,
};
*SYSLOG.lock().unwrap() = Some(sender);
tracing::info!("Syslog enabled, sending to {}", target);
Ok(())
}
/// Send a syslog message with the given severity and message.
/// Severity: 6=info, 4=warning, 3=error
fn send(severity: u8, msg: &str) {
let guard = SYSLOG.lock().unwrap();
if let Some(ref sender) = *guard {
// RFC 3164 (BSD syslog): <priority>Mon DD HH:MM:SS hostname program: message
// facility=16 (local0) * 8 + severity
let priority = 128 + severity;
let timestamp = bsd_timestamp();
let syslog_msg = format!(
"<{}>{} {} btest-rs: {}",
priority, timestamp, sender.hostname, msg,
);
let _ = sender.socket.send_to(syslog_msg.as_bytes(), &sender.target);
}
}
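The PRI value folds facility and severity into one number: facility times 8, plus severity. With facility 16 (local0) that product is 128, which is where the `128 + severity` above comes from. A minimal sketch of the encoding:

```rust
/// RFC 3164 PRI encoding: facility * 8 + severity.
/// local0 is facility 16, so local0 at severity 6 (info) yields 134.
fn pri(facility: u8, severity: u8) -> u8 {
    facility * 8 + severity
}
```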
fn bsd_timestamp() -> String {
// RFC 3164 format: "Mon DD HH:MM:SS" (no year)
use std::time::{SystemTime, UNIX_EPOCH};
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
// Simple conversion — good enough for syslog
let secs_in_day = 86400u64;
let days = now / secs_in_day;
let time_of_day = now % secs_in_day;
let hours = time_of_day / 3600;
let minutes = (time_of_day % 3600) / 60;
let seconds = time_of_day % 60;
// Month tables for converting the day-of-year remainder to month/day
let months = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"];
let days_in_months = [31u64,28,31,30,31,30,31,31,30,31,30,31];
// Days since epoch to year/month/day
let mut y = 1970u64;
let mut remaining = days;
loop {
let leap = if y % 4 == 0 && (y % 100 != 0 || y % 400 == 0) { 366 } else { 365 };
if remaining < leap { break; }
remaining -= leap;
y += 1;
}
let leap = y % 4 == 0 && (y % 100 != 0 || y % 400 == 0);
let mut m = 0usize;
for i in 0..12 {
let mut d = days_in_months[i];
if i == 1 && leap { d += 1; }
if remaining < d { m = i; break; }
remaining -= d;
}
let day = remaining + 1;
format!("{} {:2} {:02}:{:02}:{:02}", months[m], day, hours, minutes, seconds)
}
// --- Public logging functions ---
pub fn auth_success(peer: &str, username: &str, auth_type: &str) {
let msg = format!(
"AUTH_SUCCESS peer={} user={} type={}",
peer, username, auth_type,
);
tracing::info!("{}", msg);
send(6, &msg);
}
pub fn auth_failure(peer: &str, username: &str, auth_type: &str, reason: &str) {
let msg = format!(
"AUTH_FAILURE peer={} user={} type={} reason={}",
peer, username, auth_type, reason,
);
tracing::warn!("{}", msg);
send(4, &msg);
}
pub fn test_start(peer: &str, proto: &str, direction: &str, conn_count: u8) {
let msg = format!(
"TEST_START peer={} proto={} dir={} connections={}",
peer, proto, direction, conn_count.max(1),
);
tracing::info!("{}", msg);
send(6, &msg);
}
pub fn test_end(
peer: &str,
proto: &str,
direction: &str,
total_tx: u64,
total_rx: u64,
total_lost: u64,
duration_secs: u32,
) {
let tx_mbps = if duration_secs > 0 {
total_tx as f64 * 8.0 / duration_secs as f64 / 1_000_000.0
} else {
0.0
};
let rx_mbps = if duration_secs > 0 {
total_rx as f64 * 8.0 / duration_secs as f64 / 1_000_000.0
} else {
0.0
};
let msg = format!(
"TEST_END peer={} proto={} dir={} duration={}s tx_avg={:.2}Mbps rx_avg={:.2}Mbps tx_bytes={} rx_bytes={} lost={}",
peer, proto, direction, duration_secs, tx_mbps, rx_mbps, total_tx, total_rx, total_lost,
);
tracing::info!("{}", msg);
send(6, &msg);
}
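The `tx_avg`/`rx_avg` fields in `TEST_END` are average megabits per second. A minimal standalone sketch of that arithmetic (not the module's code, just the same formula factored into a helper for checking):

```rust
// Quick check of the average-throughput arithmetic used in TEST_END:
// bytes * 8 bits / seconds / 1e6 = Mbps, with a zero-duration guard.
fn mbps(bytes: u64, duration_secs: u32) -> f64 {
    if duration_secs == 0 {
        return 0.0;
    }
    bytes as f64 * 8.0 / duration_secs as f64 / 1_000_000.0
}

fn main() {
    // 125_000_000 bytes over 10 s is 100 Mbps
    assert!((mbps(125_000_000, 10) - 100.0).abs() < 1e-9);
    assert_eq!(mbps(0, 0), 0.0);
    println!("ok");
}
```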
/// Check if syslog is enabled.
pub fn is_enabled() -> bool {
SYSLOG.lock().unwrap().is_some()
}
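The module above hardcodes `128 + severity` for local0. A small sketch of the general RFC 3164 priority and header construction (the `priority`/`header` helpers here are illustrative, not part of the crate):

```rust
// Sketch of RFC 3164 message assembly; facility/severity values are the
// standard ones, the program name "btest-rs" mirrors the logger above.
fn priority(facility: u8, severity: u8) -> u8 {
    // RFC 3164: PRI = facility * 8 + severity
    facility * 8 + severity
}

fn header(pri: u8, timestamp: &str, hostname: &str, msg: &str) -> String {
    format!("<{}>{} {} btest-rs: {}", pri, timestamp, hostname, msg)
}

fn main() {
    // local0 (16) + info (6) = 134, matching the module's 128 + 6
    assert_eq!(priority(16, 6), 134);
    let line = header(priority(16, 6), "Apr  2 08:28:45", "host1", "TEST_START");
    assert!(line.starts_with("<134>"));
    println!("{}", line);
}
```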

tests/ecsrp5_test.rs Normal file
@@ -0,0 +1,193 @@
use std::time::Duration;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
const SERVER_PORT: u16 = 13000;
async fn start_ecsrp5_server(port: u16) {
tokio::spawn(async move {
let _ = btest_rs::server::run_server(
port,
Some("testuser".into()),
Some("testpass".into()),
true,
Some("127.0.0.1".into()),
None,
)
.await;
});
tokio::time::sleep(Duration::from_millis(200)).await;
}
async fn start_md5_server(port: u16) {
tokio::spawn(async move {
let _ = btest_rs::server::run_server(
port,
Some("testuser".into()),
Some("testpass".into()),
false,
Some("127.0.0.1".into()),
None,
)
.await;
});
tokio::time::sleep(Duration::from_millis(200)).await;
}
async fn start_noauth_server(port: u16) {
tokio::spawn(async move {
let _ = btest_rs::server::run_server(port, None, None, false, Some("127.0.0.1".into()), None).await;
});
tokio::time::sleep(Duration::from_millis(200)).await;
}
#[tokio::test]
async fn test_ecsrp5_server_sends_03_response() {
let port = SERVER_PORT;
start_ecsrp5_server(port).await;
let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port))
.await
.unwrap();
// Read HELLO
let mut buf = [0u8; 4];
stream.read_exact(&mut buf).await.unwrap();
assert_eq!(buf, [0x01, 0x00, 0x00, 0x00]);
// Send command (TCP, server TX)
let cmd = btest_rs::protocol::Command::new(
btest_rs::protocol::CMD_PROTO_TCP,
btest_rs::protocol::CMD_DIR_TX,
);
stream.write_all(&cmd.serialize()).await.unwrap();
stream.flush().await.unwrap();
// Should receive EC-SRP5 auth required
stream.read_exact(&mut buf).await.unwrap();
assert_eq!(buf, [0x03, 0x00, 0x00, 0x00], "Expected EC-SRP5 auth response");
}
#[tokio::test]
async fn test_ecsrp5_full_client_auth() {
let port = SERVER_PORT + 1;
start_ecsrp5_server(port).await;
// Use our client with EC-SRP5
let handle = tokio::spawn(async move {
btest_rs::client::run_client(
"127.0.0.1",
port,
btest_rs::protocol::CMD_DIR_TX, // server TX = client RX
false,
0,
0,
Some("testuser".into()),
Some("testpass".into()),
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});
tokio::time::sleep(Duration::from_secs(3)).await;
handle.abort();
// If we got here without panic, EC-SRP5 auth + data transfer worked
}
#[tokio::test]
async fn test_ecsrp5_wrong_password_fails() {
let port = SERVER_PORT + 2;
start_ecsrp5_server(port).await;
let result = btest_rs::client::run_client(
"127.0.0.1",
port,
btest_rs::protocol::CMD_DIR_TX,
false,
0,
0,
Some("testuser".into()),
Some("wrongpass".into()),
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await;
assert!(result.is_err(), "Wrong password should fail");
}
#[tokio::test]
async fn test_md5_auth_still_works() {
let port = SERVER_PORT + 3;
start_md5_server(port).await;
let handle = tokio::spawn(async move {
btest_rs::client::run_client(
"127.0.0.1",
port,
btest_rs::protocol::CMD_DIR_TX,
false,
0,
0,
Some("testuser".into()),
Some("testpass".into()),
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});
tokio::time::sleep(Duration::from_secs(2)).await;
handle.abort();
}
#[tokio::test]
async fn test_noauth_still_works() {
let port = SERVER_PORT + 4;
start_noauth_server(port).await;
let handle = tokio::spawn(async move {
btest_rs::client::run_client(
"127.0.0.1",
port,
btest_rs::protocol::CMD_DIR_TX,
false,
0,
0,
None,
None,
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});
tokio::time::sleep(Duration::from_secs(2)).await;
handle.abort();
}
#[tokio::test]
async fn test_ecsrp5_udp_bidirectional() {
let port = SERVER_PORT + 5;
start_ecsrp5_server(port).await;
let handle = tokio::spawn(async move {
btest_rs::client::run_client(
"127.0.0.1",
port,
btest_rs::protocol::CMD_DIR_BOTH,
true, // UDP
0,
0,
Some("testuser".into()),
Some("testpass".into()),
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});
tokio::time::sleep(Duration::from_secs(3)).await;
handle.abort();
}
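The tests above assert on the raw 4-byte control words directly. A sketch of interpreting them, based only on the byte patterns these tests check (`0x01` hello, `0x03` EC-SRP5 auth required; other codes are left unclassified here):

```rust
// Sketch: control words are 4 bytes with the meaningful code in byte 0.
#[derive(Debug, PartialEq)]
enum ServerWord {
    Hello,      // [0x01, 0, 0, 0] sent by the server on connect
    EcSrp5Auth, // [0x03, 0, 0, 0] EC-SRP5 authentication required
    Other(u8),  // anything else (not covered by these tests)
}

fn classify(word: [u8; 4]) -> ServerWord {
    match word[0] {
        0x01 => ServerWord::Hello,
        0x03 => ServerWord::EcSrp5Auth,
        b => ServerWord::Other(b),
    }
}

fn main() {
    assert_eq!(classify([0x01, 0, 0, 0]), ServerWord::Hello);
    assert_eq!(classify([0x03, 0, 0, 0]), ServerWord::EcSrp5Auth);
    println!("ok");
}
```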

@@ -0,0 +1,402 @@
//! Comprehensive integration tests covering all modes, protocols, and output formats.
//! Each test starts a server, runs a client, verifies data flows, and checks CSV/stats.
use std::net::UdpSocket as StdUdpSocket;
use std::sync::atomic::Ordering;
use std::time::Duration;
const BASE_PORT: u16 = 14000;
// --- Helpers ---
async fn start_server(port: u16, ecsrp5: bool) {
let auth_user = Some("testuser".into());
let auth_pass = Some("testpass".into());
tokio::spawn(async move {
let _ = btest_rs::server::run_server(
port, auth_user, auth_pass, ecsrp5,
Some("127.0.0.1".into()), None,
).await;
});
tokio::time::sleep(Duration::from_millis(200)).await;
}
async fn start_server_noauth(port: u16) {
tokio::spawn(async move {
let _ = btest_rs::server::run_server(
port, None, None, false,
Some("127.0.0.1".into()), None,
).await;
});
tokio::time::sleep(Duration::from_millis(200)).await;
}
async fn start_server_v6(port: u16) {
tokio::spawn(async move {
let _ = btest_rs::server::run_server(
port, None, None, false,
None, Some("::1".into()),
).await;
});
tokio::time::sleep(Duration::from_millis(200)).await;
}
async fn run_client_test(
host: &str, port: u16, transmit: bool, receive: bool, udp: bool,
user: Option<&str>, pass: Option<&str>,
) -> (u64, u64, u64, u32) {
// Direction constants are from the server's perspective:
// client transmit => server RX, client receive => server TX.
let direction = match (transmit, receive) {
(true, false) => btest_rs::protocol::CMD_DIR_RX,
(false, true) => btest_rs::protocol::CMD_DIR_TX,
(true, true) => btest_rs::protocol::CMD_DIR_BOTH,
_ => panic!("must specify direction"),
};
let state = btest_rs::bandwidth::BandwidthState::new();
let state_clone = state.clone();
let host = host.to_string();
let user = user.map(String::from);
let pass = pass.map(String::from);
let handle = tokio::spawn(async move {
btest_rs::client::run_client(
&host, port, direction, udp,
0, 0, user, pass, false, state_clone,
).await
});
tokio::time::sleep(Duration::from_secs(2)).await;
state.running.store(false, Ordering::SeqCst);
tokio::time::sleep(Duration::from_millis(500)).await;
handle.abort();
state.summary()
}
// --- TCP IPv4 Tests ---
#[tokio::test]
async fn test_tcp4_receive() {
let port = BASE_PORT;
start_server_noauth(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("127.0.0.1", port, false, true, false, None, None).await;
assert!(_rx > 0, "TCP4 receive: expected rx > 0, got {}", _rx);
assert!(_intervals > 0, "TCP4 receive: expected intervals > 0");
}
#[tokio::test]
async fn test_tcp4_send() {
let port = BASE_PORT + 1;
start_server_noauth(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("127.0.0.1", port, true, false, false, None, None).await;
assert!(_tx > 0, "TCP4 send: expected tx > 0, got {}", _tx);
assert!(_intervals > 0, "TCP4 send: expected intervals > 0");
}
#[tokio::test]
async fn test_tcp4_both() {
let port = BASE_PORT + 2;
start_server_noauth(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("127.0.0.1", port, true, true, false, None, None).await;
assert!(_tx > 0, "TCP4 both: expected tx > 0, got {}", _tx);
assert!(_rx > 0, "TCP4 both: expected rx > 0, got {}", _rx);
}
// --- UDP IPv4 Tests ---
#[tokio::test]
async fn test_udp4_receive() {
let port = BASE_PORT + 3;
start_server_noauth(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("127.0.0.1", port, false, true, true, None, None).await;
assert!(_rx > 0, "UDP4 receive: expected rx > 0, got {}", _rx);
}
#[tokio::test]
async fn test_udp4_send() {
let port = BASE_PORT + 4;
start_server_noauth(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("127.0.0.1", port, true, false, true, None, None).await;
assert!(_tx > 0, "UDP4 send: expected tx > 0, got {}", _tx);
}
#[tokio::test]
async fn test_udp4_both() {
let port = BASE_PORT + 5;
start_server_noauth(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("127.0.0.1", port, true, true, true, None, None).await;
assert!(_tx > 0, "UDP4 both: expected tx > 0, got {}", _tx);
assert!(_rx > 0, "UDP4 both: expected rx > 0, got {}", _rx);
}
// --- TCP IPv6 Tests ---
#[tokio::test]
async fn test_tcp6_receive() {
let port = BASE_PORT + 6;
start_server_v6(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("::1", port, false, true, false, None, None).await;
assert!(_rx > 0, "TCP6 receive: expected rx > 0, got {}", _rx);
}
#[tokio::test]
async fn test_tcp6_send() {
let port = BASE_PORT + 7;
start_server_v6(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("::1", port, true, false, false, None, None).await;
assert!(_tx > 0, "TCP6 send: expected tx > 0, got {}", _tx);
}
#[tokio::test]
async fn test_tcp6_both() {
let port = BASE_PORT + 8;
start_server_v6(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("::1", port, true, true, false, None, None).await;
assert!(_tx > 0, "TCP6 both: expected tx > 0, got {}", _tx);
assert!(_rx > 0, "TCP6 both: expected rx > 0, got {}", _rx);
}
// --- UDP IPv6 Tests (loopback, no ENOBUFS issues) ---
#[tokio::test]
async fn test_udp6_receive() {
let port = BASE_PORT + 9;
start_server_v6(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("::1", port, false, true, true, None, None).await;
assert!(_rx > 0, "UDP6 receive: expected rx > 0, got {}", _rx);
}
#[tokio::test]
async fn test_udp6_send() {
let port = BASE_PORT + 10;
start_server_v6(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("::1", port, true, false, true, None, None).await;
assert!(_tx > 0, "UDP6 send: expected tx > 0, got {}", _tx);
}
#[tokio::test]
async fn test_udp6_both() {
let port = BASE_PORT + 11;
start_server_v6(port).await;
let (_tx, _rx, _, _intervals) = run_client_test("::1", port, true, true, true, None, None).await;
assert!(_tx > 0, "UDP6 both: expected tx > 0, got {}", _tx);
assert!(_rx > 0, "UDP6 both: expected rx > 0, got {}", _rx);
}
// --- Authentication Tests ---
#[tokio::test]
async fn test_md5_auth_works() {
let port = BASE_PORT + 12;
start_server(port, false).await;
let (_tx, _rx, _, _) = run_client_test(
"127.0.0.1", port, false, true, false,
Some("testuser"), Some("testpass"),
).await;
assert!(_rx > 0, "MD5 auth: expected data flow");
}
#[tokio::test]
async fn test_ecsrp5_auth_works() {
let port = BASE_PORT + 13;
start_server(port, true).await;
let (_tx, _rx, _, _) = run_client_test(
"127.0.0.1", port, false, true, false,
Some("testuser"), Some("testpass"),
).await;
assert!(_rx > 0, "EC-SRP5 auth: expected data flow");
}
#[tokio::test]
async fn test_ecsrp5_wrong_password() {
let port = BASE_PORT + 14;
start_server(port, true).await;
let state = btest_rs::bandwidth::BandwidthState::new();
let result = btest_rs::client::run_client(
"127.0.0.1", port,
btest_rs::protocol::CMD_DIR_TX,
false, 0, 0,
Some("testuser".into()), Some("wrongpass".into()),
false, state,
).await;
assert!(result.is_err(), "Wrong password should fail");
}
// --- CSV Output Tests ---
#[tokio::test]
async fn test_csv_created_client() {
let port = BASE_PORT + 15;
start_server_noauth(port).await;
let csv_path = format!("/tmp/btest_test_csv_{}.csv", port);
let _ = std::fs::remove_file(&csv_path);
// Initialize CSV
btest_rs::csv_output::init(&csv_path).unwrap();
let (tx, rx, lost, _intervals) = run_client_test(
"127.0.0.1", port, false, true, false, None, None,
).await;
// Write result like main.rs does
btest_rs::csv_output::write_result(
"127.0.0.1", port, "TCP", "receive",
2, tx, rx, lost, 0, 0, "none",
);
// Verify CSV exists and has data
let content = std::fs::read_to_string(&csv_path).unwrap();
let lines: Vec<&str> = content.lines().collect();
assert!(lines.len() >= 2, "CSV should have header + at least 1 row, got {} lines", lines.len());
assert!(lines[0].starts_with("timestamp,"), "CSV header missing");
assert!(lines[1].contains("TCP"), "CSV row should contain protocol");
// Check that tx or rx bytes are non-zero (CSV fields at index 8 and 9)
let fields: Vec<&str> = lines[1].split(',').collect();
assert!(fields.len() >= 10, "CSV row should have enough fields");
let tx_bytes: u64 = fields[8].parse().unwrap_or(0);
let rx_bytes: u64 = fields[9].parse().unwrap_or(0);
assert!(tx_bytes > 0 || rx_bytes > 0, "CSV should have non-zero bytes: tx={} rx={}", tx_bytes, rx_bytes);
let _ = std::fs::remove_file(&csv_path);
}
#[tokio::test]
async fn test_csv_created_server() {
let port = BASE_PORT + 16;
let csv_path = format!("/tmp/btest_test_server_csv_{}.csv", port);
let _ = std::fs::remove_file(&csv_path);
btest_rs::csv_output::init(&csv_path).unwrap();
start_server_noauth(port).await;
let _ = run_client_test("127.0.0.1", port, false, true, false, None, None).await;
tokio::time::sleep(Duration::from_millis(500)).await;
let content = std::fs::read_to_string(&csv_path).unwrap_or_default();
let lines: Vec<&str> = content.lines().collect();
assert!(lines.len() >= 2, "Server CSV should have header + rows, got {}", lines.len());
let _ = std::fs::remove_file(&csv_path);
}
// --- Syslog Tests ---
#[tokio::test]
async fn test_syslog_emits_events() {
// Bind a local UDP socket to receive syslog messages
let syslog_sock = StdUdpSocket::bind("127.0.0.1:0").unwrap();
let syslog_addr = syslog_sock.local_addr().unwrap();
syslog_sock.set_nonblocking(true).unwrap();
// Initialize syslog to our test socket
btest_rs::syslog_logger::init(&syslog_addr.to_string()).unwrap();
let port = BASE_PORT + 17;
start_server_noauth(port).await;
let _ = run_client_test("127.0.0.1", port, false, true, false, None, None).await;
tokio::time::sleep(Duration::from_millis(500)).await;
// Read all syslog messages
let mut messages = Vec::new();
let mut buf = [0u8; 4096];
loop {
match syslog_sock.recv(&mut buf) {
Ok(n) => messages.push(String::from_utf8_lossy(&buf[..n]).to_string()),
Err(_) => break,
}
}
let all = messages.join("\n");
// Server is noauth, so only test events are expected
assert!(all.contains("TEST_START"), "Syslog should contain TEST_START, got: {}", all);
assert!(all.contains("TEST_END"), "Syslog should contain TEST_END");
}
// --- Bandwidth State Tests ---
#[tokio::test]
async fn test_bandwidth_state_record_interval() {
let state = btest_rs::bandwidth::BandwidthState::new();
state.record_interval(1000, 2000, 5);
state.record_interval(3000, 4000, 10);
let (tx, rx, lost, intervals) = state.summary();
assert_eq!(tx, 4000);
assert_eq!(rx, 6000);
assert_eq!(lost, 15);
assert_eq!(intervals, 2);
}
#[tokio::test]
async fn test_bandwidth_state_running_flag() {
let state = btest_rs::bandwidth::BandwidthState::new();
assert!(state.running.load(Ordering::Relaxed));
state.running.store(false, Ordering::SeqCst);
assert!(!state.running.load(Ordering::Relaxed));
}
// --- CPU Reporting Tests ---
/// Helper that returns the full BandwidthState (not just summary) so we can check remote_cpu.
async fn run_client_with_state(
host: &str, port: u16, transmit: bool, receive: bool, udp: bool,
secs: u64,
) -> std::sync::Arc<btest_rs::bandwidth::BandwidthState> {
let direction = match (transmit, receive) {
(true, false) => btest_rs::protocol::CMD_DIR_RX,
(false, true) => btest_rs::protocol::CMD_DIR_TX,
(true, true) => btest_rs::protocol::CMD_DIR_BOTH,
_ => panic!("must specify direction"),
};
let state = btest_rs::bandwidth::BandwidthState::new();
let state_clone = state.clone();
let host = host.to_string();
let handle = tokio::spawn(async move {
btest_rs::client::run_client(
&host, port, direction, udp,
0, 0, None, None, false, state_clone,
).await
});
tokio::time::sleep(Duration::from_secs(secs)).await;
state.running.store(false, Ordering::SeqCst);
tokio::time::sleep(Duration::from_millis(500)).await;
handle.abort();
state
}
#[test]
fn test_local_cpu_in_range() {
// CPU sampler should report a plausible value after warming up
btest_rs::cpu::start_sampler();
std::thread::sleep(Duration::from_secs(2));
let cpu = btest_rs::cpu::get();
// On CI or idle machines, CPU may genuinely be 0, so just check it doesn't panic
// and returns a value in range
assert!(cpu <= 100, "CPU should be 0-100, got {}", cpu);
}
#[tokio::test]
async fn test_tcp_remote_cpu_both() {
let port = BASE_PORT + 20;
start_server_noauth(port).await;
let state = run_client_with_state("127.0.0.1", port, true, true, false, 3).await;
let remote_cpu = state.remote_cpu.load(Ordering::Relaxed);
// On loopback with bidirectional traffic, server CPU should be > 0
// The status messages are interleaved in the TCP data stream
assert!(remote_cpu > 0, "TCP BOTH: remote CPU should be > 0 on loopback, got {}", remote_cpu);
}
#[tokio::test]
async fn test_tcp_remote_cpu_tx_only() {
let port = BASE_PORT + 21;
start_server_noauth(port).await;
let state = run_client_with_state("127.0.0.1", port, true, false, false, 3).await;
let remote_cpu = state.remote_cpu.load(Ordering::Relaxed);
// TX-only: server sends status messages that the status reader should parse
assert!(remote_cpu > 0, "TCP TX-only: remote CPU should be > 0 on loopback, got {}", remote_cpu);
}
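The bandwidth-state tests earlier in this file pin down the accumulator semantics (`record_interval` sums bytes and losses and counts one interval per call). A minimal sketch of that behavior; the names mirror `BandwidthState` but this is not the crate's implementation:

```rust
// Sketch of the accumulator contract the bandwidth tests assume:
// each record_interval adds tx/rx/lost and bumps the interval count.
use std::sync::atomic::{AtomicU32, AtomicU64, Ordering};

#[derive(Default)]
struct Acc {
    tx: AtomicU64,
    rx: AtomicU64,
    lost: AtomicU64,
    intervals: AtomicU32,
}

impl Acc {
    fn record_interval(&self, tx: u64, rx: u64, lost: u64) {
        self.tx.fetch_add(tx, Ordering::Relaxed);
        self.rx.fetch_add(rx, Ordering::Relaxed);
        self.lost.fetch_add(lost, Ordering::Relaxed);
        self.intervals.fetch_add(1, Ordering::Relaxed);
    }

    fn summary(&self) -> (u64, u64, u64, u32) {
        (
            self.tx.load(Ordering::Relaxed),
            self.rx.load(Ordering::Relaxed),
            self.lost.load(Ordering::Relaxed),
            self.intervals.load(Ordering::Relaxed),
        )
    }
}

fn main() {
    let a = Acc::default();
    a.record_interval(1000, 2000, 5);
    a.record_interval(3000, 4000, 10);
    // Same figures as test_bandwidth_state_record_interval above
    assert_eq!(a.summary(), (4000, 6000, 15, 2));
    println!("ok");
}
```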

@@ -8,7 +8,7 @@ async fn start_test_server(port: u16, auth_user: Option<&str>, auth_pass: Option
let user = auth_user.map(String::from);
let pass = auth_pass.map(String::from);
tokio::spawn(async move {
let _ = btest_rs::server::run_server(port, user, pass).await;
let _ = btest_rs::server::run_server(port, user, pass, false, Some("127.0.0.1".into()), None).await;
});
tokio::time::sleep(Duration::from_millis(100)).await;
}
@@ -153,6 +153,7 @@ async fn test_loopback_tcp_rx() {
None,
None,
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});
@@ -177,6 +178,7 @@ async fn test_loopback_tcp_tx() {
None,
None,
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});
@@ -201,6 +203,7 @@ async fn test_loopback_tcp_both() {
None,
None,
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});
@@ -225,6 +228,7 @@ async fn test_loopback_tcp_with_auth() {
Some("admin".into()),
Some("secret".into()),
false,
btest_rs::bandwidth::BandwidthState::new(),
)
.await
});