Adds btest-server-pro: multi-user bandwidth test server with SQLite DB,
per-IP quotas (daily/weekly/monthly), inline byte budget enforcement,
TCP multi-connection support, MD5 auth, web dashboard with Chart.js
graphs, quota progress bars, and JSON export.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Inline byte budget in BandwidthState prevents quota overshoot at any
link speed (TX/RX loops check per-packet, not per-interval; see the
sketch after this list)
- TCP multi-connection support for server-pro (session tokens, secondary
connection joins, delegates to standard multi-conn handler)
- MD5 password verification against stored raw passwords in user DB
- Web dashboard: quota progress bars (daily/weekly/monthly), JSON export
endpoint (/api/ip/{ip}/export), quota API (/api/ip/{ip}/quota)
- Landing page with usage instructions, UDP NAT warning, credentials
- Fix IP usage double-counting bug in QuotaManager::record_usage
- UserDb now stores DB path and raw passwords for MD5 auth
- 10 enforcer tests (4 new: budget calc, budget stop, budget exhausted,
unlimited passthrough)
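A minimal, self-contained sketch of the per-packet budget idea; the `ByteBudget` type and `try_consume` name are illustrative stand-ins, not the actual BandwidthState API.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative stand-in for the inline budget held by the session state.
pub struct ByteBudget {
    /// Remaining bytes the session may transfer; u64::MAX means unlimited.
    remaining: AtomicU64,
}

impl ByteBudget {
    pub fn new(limit: Option<u64>) -> Self {
        Self { remaining: AtomicU64::new(limit.unwrap_or(u64::MAX)) }
    }

    /// Called once per packet by the TX/RX loops. Returns false when the
    /// budget is exhausted so the loop stops before overshooting.
    pub fn try_consume(&self, packet_len: u64) -> bool {
        let mut cur = self.remaining.load(Ordering::Relaxed);
        loop {
            if cur == u64::MAX {
                return true; // unlimited passthrough
            }
            if cur < packet_len {
                return false; // would overshoot: stop here
            }
            match self.remaining.compare_exchange_weak(
                cur, cur - packet_len, Ordering::Relaxed, Ordering::Relaxed,
            ) {
                Ok(_) => return true,
                Err(actual) => cur = actual,
            }
        }
    }
}

fn main() {
    let budget = ByteBudget::new(Some(3_000));
    let mut sent = 0u64;
    while budget.try_consume(1_500) {
        sent += 1_500; // the real loop would send the packet here
    }
    assert_eq!(sent, 3_000); // stops exactly at the budget, regardless of link speed
}
```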
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The pro server now runs actual bandwidth tests with concurrent quota
enforcement. Data flows through the standard btest TCP/UDP handlers
while the QuotaEnforcer monitors usage every N seconds.
Public API added to btest_rs::server:
- run_tcp_test(stream, cmd, state) — TCP test with external state
- run_udp_test(stream, peer, cmd, state, port) — UDP with external state
These allow the pro server to share BandwidthState between the test
handlers and the enforcer, enabling mid-session quota termination.
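A self-contained sketch of that sharing pattern; all types here are simplified stand-ins for the real BandwidthState and QuotaEnforcer. The data loop and the enforcer hold the same Arc'd state, so the enforcer can end the test mid-session by clearing the running flag.

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::time::Duration;

#[derive(Default)]
struct SharedState {            // stand-in for the real BandwidthState
    running: AtomicBool,
    tx_bytes: AtomicU64,
}

async fn run_test(state: Arc<SharedState>) {
    // Stand-in for run_tcp_test/run_udp_test: sends while `running` is true.
    while state.running.load(Ordering::Relaxed) {
        state.tx_bytes.fetch_add(1_500, Ordering::Relaxed);
        tokio::time::sleep(Duration::from_millis(1)).await;
    }
}

async fn run_enforcer(state: Arc<SharedState>, quota: u64) {
    // Stand-in for the QuotaEnforcer loop: checks usage periodically.
    loop {
        tokio::time::sleep(Duration::from_millis(10)).await;
        if state.tx_bytes.load(Ordering::Relaxed) >= quota {
            state.running.store(false, Ordering::Relaxed); // terminate mid-session
            break;
        }
    }
}

#[tokio::main]
async fn main() {
    let state = Arc::new(SharedState::default());
    state.running.store(true, Ordering::Relaxed);
    let test = tokio::spawn(run_test(state.clone()));
    let enforcer = tokio::spawn(run_enforcer(state.clone(), 150_000));
    let _ = tokio::join!(test, enforcer);
    println!("stopped after {} bytes", state.tx_bytes.load(Ordering::Relaxed));
}
```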
Verified end-to-end:
- Test 1: TCP download at 70 Gbps, ran full duration
- Test 2: TCP upload, KILLED mid-session by enforcer after 3 checks
(user_daily_quota_exceeded at 23.8 GB vs 50 MB limit)
- Test 3: REJECTED at connection time (quota already used up)
64 tests, all passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New server_loop.rs:
- Custom accept loop with pre-connection IP quota check
- DB-based MD5 authentication (verifies user exists + enabled)
- Pre-test user quota check (reject if already exceeded)
- Session tracking in DB (start_session/end_session)
- QuotaEnforcer spawned alongside each test
- Post-test usage recording to both user + IP tables
- Syslog events for auth, quota rejection, test start/end
Full flow:
1. Accept connection → check IP quota → reject if exceeded
2. Handshake + auth → verify user in DB → reject if disabled/not found
3. Check user quota → reject if daily/weekly/monthly exceeded
4. Start session → spawn enforcer (checks every N seconds)
5. Run test → enforcer stops it if quota hit or max_duration reached
6. Record usage → persist to DB → disconnect IP tracker
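A condensed, runnable skeleton of this flow; every helper below is a stub standing in for the DB-backed logic in server_loop.rs.

```rust
use std::net::IpAddr;

#[allow(dead_code)]
#[derive(Debug)]
enum Reject { IpQuota, Auth, UserQuota }

// Stubs standing in for the DB-backed checks in server_loop.rs.
fn check_ip_quota(_ip: IpAddr) -> Result<(), Reject> { Ok(()) }
fn authenticate(_user: &str, _md5_digest: &[u8]) -> Result<(), Reject> { Ok(()) }
fn check_user_quota(_user: &str) -> Result<(), Reject> { Ok(()) }
fn start_session(_user: &str, _ip: IpAddr) -> u64 { 1 }
fn run_test_with_enforcer(_session: u64) -> u64 { 42_000_000 }
fn record_usage(_session: u64, _bytes: u64) {}
fn end_session(_session: u64) {}

fn handle_connection(ip: IpAddr, user: &str, digest: &[u8]) -> Result<(), Reject> {
    check_ip_quota(ip)?;          // 1. reject before the handshake if the IP is over quota
    authenticate(user, digest)?;  // 2. MD5 auth: user must exist and be enabled
    check_user_quota(user)?;      // 3. reject if daily/weekly/monthly already exceeded
    let session = start_session(user, ip);        // 4. session row; enforcer spawned alongside
    let bytes = run_test_with_enforcer(session);  // 5. enforcer may stop the test early
    record_usage(session, bytes);                 // 6. persist usage to user + IP tables
    end_session(session);
    Ok(())
}

fn main() {
    let result = handle_connection("192.0.2.10".parse().unwrap(), "alice", b"digest");
    println!("connection handled: {:?}", result);
}
```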
TODO: Wire actual TX/RX data loops (currently only the enforcer runs;
data transfer is not yet delegated from the pro server to the standard handlers)
64 tests, all passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New enforcer.rs module runs alongside active tests:
- Periodic quota checks (default every 10s, configurable --quota-check-interval)
- Max duration enforcement — forcefully stops test after limit
- User quotas: daily/weekly/monthly checked against DB + current session
- IP quotas: daily/weekly/monthly checked against DB + current session
- Flush session bytes to DB for accurate cross-session tracking
- Sets state.running=false to gracefully terminate on quota breach
StopReason enum tracks why a test was stopped:
MaxDuration, UserDailyQuota, UserWeeklyQuota, UserMonthlyQuota,
IpDailyQuota, IpWeeklyQuota, IpMonthlyQuota, ClientDisconnected
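A sketch of one enforcer check built around that enum. The variant names are as listed above; the input struct, its field names, and the thresholds shown are illustrative, and the weekly/monthly branches are omitted.

```rust
use std::time::{Duration, Instant};

// Variant names as listed above; everything else in this sketch is illustrative.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum StopReason {
    MaxDuration,
    UserDailyQuota,
    UserWeeklyQuota,
    UserMonthlyQuota,
    IpDailyQuota,
    IpWeeklyQuota,
    IpMonthlyQuota,
    ClientDisconnected,
}

struct CheckInput {
    started: Instant,
    max_duration: Option<Duration>,
    user_daily_used: u64,            // DB total + current session bytes
    user_daily_limit: Option<u64>,
    ip_daily_used: u64,
    ip_daily_limit: Option<u64>,
    client_connected: bool,
}

/// One periodic enforcer check: Some(reason) means the test must stop.
/// Weekly/monthly checks follow the same pattern and are omitted.
fn check(i: &CheckInput) -> Option<StopReason> {
    if !i.client_connected {
        return Some(StopReason::ClientDisconnected);
    }
    if matches!(i.max_duration, Some(max) if i.started.elapsed() >= max) {
        return Some(StopReason::MaxDuration);
    }
    if matches!(i.user_daily_limit, Some(limit) if i.user_daily_used >= limit) {
        return Some(StopReason::UserDailyQuota);
    }
    if matches!(i.ip_daily_limit, Some(limit) if i.ip_daily_used >= limit) {
        return Some(StopReason::IpDailyQuota);
    }
    None
}

fn main() {
    let input = CheckInput {
        started: Instant::now(),
        max_duration: Some(Duration::from_secs(600)),
        user_daily_used: 60_000_000,          // already past a 50 MB daily limit
        user_daily_limit: Some(50_000_000),
        ip_daily_used: 0,
        ip_daily_limit: None,
        client_connected: true,
    };
    assert_eq!(check(&input), Some(StopReason::UserDailyQuota));
}
```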
Tests (6 new, all passing):
- test_enforcer_max_duration: stops after max_duration seconds
- test_enforcer_client_disconnect: detects normal client exit
- test_enforcer_user_daily_quota_exceeded: stops when user quota hit
- test_enforcer_ip_daily_quota_exceeded: stops when IP quota hit
- test_enforcer_under_quota_runs_normally: doesn't stop if under limits
- test_enforcer_flush_records_usage: verifies DB persistence
64 total tests (58 standard + 6 enforcer), all passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Instead of manually setting up Rust + makepkg, install yay first and
then run `yay -S btest-rs --noconfirm`, exactly how a user would.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
scripts/test-aur-remote.sh: SSHes to a remote x86_64 server, spins up
an Arch Docker container, installs btest-rs from AUR, runs TCP + UDP
loopback tests, and cleans up.
Usage: ./scripts/test-aur-remote.sh root@myserver
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Merges separate .sha256 files (from the macOS build) into the main
checksums-sha256.txt, adds missing checksums, and deduplicates.
Adds macOS to the release notes table.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- README: Raspberry Pi install section with auto-detect architecture
- README: pre-built binary download section for all platforms
- Docker docs: dual registry (Gitea + GHCR)
- scripts/push-docker-all.sh: push to both registries in one command
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New in v0.6.0:
- CPU usage: local/remote shown per interval (cpu: 12%/33%)
- Warning indicator (!) when CPU > 70% on either side
- MikroTik CPU encoding: 0x80 | percentage in status byte 1
- CSV includes local_cpu_pct and remote_cpu_pct columns
- Status message format corrected to match MikroTik wire format:
[type:1][cpu:1][00:2][seq:4 LE][bytes:4 LE]
- Removed btest-opensource submodule (fully reimplemented)
- Deleted research/ecsrp5 branch
- Updated all docs: architecture, user-guide, man page, protocol
- Version bumped to 0.6.0
58 tests, all passing. Zero warnings.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- MikroTik encodes CPU as 0x80 | percentage (high bit flag)
- Deserialize: mask with 0x7F and cap at 100
- Serialize: set high bit (0x80 | cpu) to match MikroTik format
- CSV now includes local_cpu_pct and remote_cpu_pct columns
- Both client and server write CPU to CSV
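The encode/decode pair in isolation. Function names are illustrative (the real code lives in the status message serializer), and the cap on encode is an extra guard, not something stated in this commit.

```rust
// Illustrative helpers; the real logic lives in the status message serializer.

/// Serialize: set the high bit, MikroTik-style.
fn encode_cpu(percent: u8) -> u8 {
    0x80 | percent.min(100)
}

/// Deserialize: strip the flag bit and cap at 100%.
fn decode_cpu(byte: u8) -> u8 {
    (byte & 0x7F).min(100)
}

fn main() {
    assert_eq!(encode_cpu(12), 0x8C);
    assert_eq!(decode_cpu(0x8C), 12);
    assert_eq!(decode_cpu(0xFF), 100); // garbage input still capped
}
```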
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CPU usage feature:
- New cpu.rs module: background sampler thread, cross-platform (macOS + Linux)
- Status message byte 1 now carries CPU load (0-100%), matching MikroTik format
- Status format corrected: [type][cpu][00][00][seq:4 LE][bytes:4 LE]
- Client and server exchange CPU in every status message
- Display format: "cpu: 40%/12%" (local/remote), "!" warning if > 70%
- Both client and server show local + remote CPU per interval
- Syslog TEST_END could include CPU averages (future enhancement)
Removed btest-opensource submodule — we've fully reimplemented the protocol
with EC-SRP5 auth, multi-connection, IPv6, syslog, CSV, and CPU monitoring.
The original project is still credited in LICENSE and README.
58 tests, all passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Server now writes a CSV row for each completed test with peer IP,
protocol, direction, duration, avg speeds, bytes, and lost packets.
Verified on loopback: TX 35 Gbps, RX 51 Gbps captured in CSV.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The state is now created in main.rs and passed into run_client, so
when --duration timeout cancels the future, the stats are still
accessible via shared_state.summary(). CSV and syslog now show
real speeds and byte counts.
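A self-contained sketch of the pattern with stand-in types (the real run_client and summary() differ in shape): main owns the Arc, so the counters survive the cancelled future.

```rust
use std::sync::{atomic::{AtomicU64, Ordering}, Arc};
use std::time::Duration;

#[derive(Default)]
struct SharedStats {           // stand-in for the real BandwidthState
    tx_bytes: AtomicU64,
}

impl SharedStats {
    fn summary(&self) -> u64 { // stand-in for BandwidthState::summary()
        self.tx_bytes.load(Ordering::Relaxed)
    }
}

async fn run_client(state: Arc<SharedStats>) {
    // Stand-in for the real client: keeps counting until it is cancelled.
    loop {
        state.tx_bytes.fetch_add(1_500, Ordering::Relaxed);
        tokio::time::sleep(Duration::from_millis(1)).await;
    }
}

#[tokio::main]
async fn main() {
    let state = Arc::new(SharedStats::default());
    // --duration: the timeout cancels the client future, but `state` lives on,
    // so CSV and syslog can still read real totals afterwards.
    let _ = tokio::time::timeout(Duration::from_millis(50), run_client(state.clone())).await;
    println!("tx_bytes after timeout: {}", state.summary());
}
```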
Verified: TCP loopback shows 32 Gbps in CSV output.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
run_client and sub-functions now return (tx_bytes, rx_bytes, lost, intervals).
BandwidthState::record_interval() called in both TCP and UDP client status
loops. CSV and syslog TEST_END now show real speeds and byte counts.
Also raised client UDP TX error threshold from 1000 to 50000 with
adaptive backoff matching the server.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Client mode now emits TEST_START and TEST_END syslog events
- Client UDP TX threshold raised from 1000 to 50000 with adaptive backoff
(matching server behavior) — prevents premature TX death on macOS
- Updated all docs (README, user-guide, architecture, protocol, docker)
- Added results.csv to gitignore
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
TEST_END now includes: duration, avg TX/RX Mbps, total bytes, lost packets.
All test functions track cumulative totals via BandwidthState::record_interval()
and return summary stats.
Example:
TEST_END peer=172.16.81.1:59070 proto=UDP dir=TX duration=6s
tx_avg=275.00Mbps rx_avg=0.00Mbps tx_bytes=206250000 rx_bytes=0 lost=0
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With flags(no-parse) on the source, syslog-ng doesn't extract
the program name. Use match("btest-rs:" value("MESSAGE")) instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Syslog now uses RFC 3164 (BSD) format with proper timestamps
and facility=local0 for easy filtering
- Added deploy/syslog-ng-btest.conf with filters for:
- All btest events (all.log + daily rotation)
- Auth events only (auth.log)
- Test events only (tests.log)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
IPv6 listener now requires explicit --listen6 flag (disabled by default).
TCP over IPv6 works fully. UDP over IPv6 has macOS kernel limitations
(ENOBUFS on send_to). On Linux, IPv6 UDP works fine.
Usage:
btest -s # IPv4 only (default)
btest -s --listen6 # IPv4 + IPv6 on ::
btest -s --listen6 ::1 # IPv4 + IPv6 on specific address
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reporting tx_bytes in TX-only mode caused MikroTik to show speed on
the wrong side (Tx instead of Rx). MikroTik tracks its own Rx by
counting UDP arrivals; the bytes_received field in the status message
refers to the OTHER direction (how much we received from the client).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
In TX-only mode (MikroTik receives), we sent rx_bytes=0 in status
because we weren't receiving anything. But MikroTik client needs
to see non-zero bytes in the status to know data is flowing.
Now report tx_bytes when in TX-only mode.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
pcap analysis proved: connected send() achieves 462k pps on IPv6,
while unconnected send_to() hits ENOBUFS at 5k pps then stalls.
Reverted the "always unconnected for IPv6" workaround. Now only
multi-connection mode uses unconnected sockets. Single-connection
always connects, which works for both IPv4 and IPv6 TX and RX.
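A minimal sketch of the two modes (addresses are placeholders): single-connection connects and uses send()/recv(), multi-connection stays unconnected and uses send_to()/recv_from().

```rust
use tokio::net::UdpSocket;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Throwaway peer socket so the sends below have a real destination.
    let peer_sock = UdpSocket::bind("[::1]:0").await?;
    let peer = peer_sock.local_addr()?;

    // Single-connection mode: connect once, then plain send()/recv().
    let connected = UdpSocket::bind("[::]:0").await?;
    connected.connect(peer).await?;
    connected.send(&[0u8; 1500]).await?;

    // Multi-connection mode: stay unconnected so packets from any client
    // source port are accepted via recv_from().
    let unconnected = UdpSocket::bind("[::]:0").await?;
    unconnected.send_to(&[0u8; 1500], peer).await?;
    Ok(())
}
```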
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ENOBUFS hits every send on macOS IPv6 because the interface output queue
is full. The adaptive backoff never recovered because consecutive_errors
never reset. Now reset after sleeping, and yield more frequently (every
16 packets instead of 64).
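One plausible shape of the loop after this fix, simplified: the real TX loop also does status reporting and the 50k consecutive-error give-up, and exactly when the counter resets is paraphrased here.

```rust
use std::time::Duration;
use tokio::net::UdpSocket;

// Simplified stand-in for the real TX loop.
async fn tx_loop(socket: &UdpSocket, packet: &[u8], total: u64) {
    let mut consecutive_errors: u64 = 0;
    let mut sent: u64 = 0;
    while sent < total {
        match socket.send(packet).await {
            Ok(_) => {
                sent += 1;
                consecutive_errors = 0;
                // Yield every 16 packets so the kernel can drain the output queue.
                if sent % 16 == 0 {
                    tokio::task::yield_now().await;
                }
            }
            Err(_) => {
                // ENOBUFS: back off harder as the error streak grows (200us..10ms).
                consecutive_errors += 1;
                let backoff_us = (200 * consecutive_errors).min(10_000);
                tokio::time::sleep(Duration::from_micros(backoff_us)).await;
                if backoff_us >= 10_000 {
                    // Reset after sleeping at the cap so the backoff can
                    // recover instead of staying pegged at the maximum.
                    consecutive_errors = 0;
                }
            }
        }
    }
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let peer = UdpSocket::bind("[::1]:0").await?;
    let socket = UdpSocket::bind("[::]:0").await?;
    socket.connect(peer.local_addr()?).await?;
    tx_loop(&socket, &[0u8; 1400], 1_000).await;
    Ok(())
}
```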
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
macOS returns ENOBUFS on IPv6 send_to() until NDP neighbor resolution
completes. Send a 1-byte probe packet and wait 200ms for NDP to resolve
before starting the data blast.
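Sketch of the warm-up (probe size and delay as described in this commit; the surrounding setup is simplified).

```rust
use std::time::Duration;
use tokio::net::UdpSocket;

/// Illustrative helper: nudge the kernel into resolving the neighbor first.
async fn warm_up_ndp(socket: &UdpSocket, peer: std::net::SocketAddr) -> std::io::Result<()> {
    // A single tiny packet forces the kernel to start neighbor discovery...
    let _ = socket.send_to(&[0u8], peer).await;
    // ...and 200ms gives NDP time to resolve before the data blast begins.
    tokio::time::sleep(Duration::from_millis(200)).await;
    Ok(())
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let peer = UdpSocket::bind("[::1]:0").await?.local_addr()?;
    let socket = UdpSocket::bind("[::]:0").await?;
    warm_up_ndp(&socket, peer).await?;
    // data blast would start here
    Ok(())
}
```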
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The into_std/from_std conversion lost the buffer settings. Now create
the raw socket with socket2 first, set SO_SNDBUF/SO_RCVBUF to 4MB,
then wrap with tokio. Also logs actual buffer sizes for debugging.
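A sketch of that construction order, assuming the socket2 0.5 API; the helper name and error handling are illustrative.

```rust
use socket2::{Domain, Protocol, Socket, Type};
use std::net::SocketAddr;

// Illustrative helper showing the order: socket2 first, buffers, then tokio.
fn make_udp_socket(addr: SocketAddr) -> std::io::Result<tokio::net::UdpSocket> {
    let domain = if addr.is_ipv6() { Domain::IPV6 } else { Domain::IPV4 };
    // Create the raw socket first so the buffer sizes survive the tokio wrap.
    let sock = Socket::new(domain, Type::DGRAM, Some(Protocol::UDP))?;
    sock.set_send_buffer_size(4 * 1024 * 1024)?;
    sock.set_recv_buffer_size(4 * 1024 * 1024)?;
    sock.bind(&addr.into())?;
    sock.set_nonblocking(true)?; // required before handing it to tokio
    // Log what the kernel actually granted (it may clamp the request).
    eprintln!(
        "sndbuf={} rcvbuf={}",
        sock.send_buffer_size()?,
        sock.recv_buffer_size()?
    );
    tokio::net::UdpSocket::from_std(sock.into())
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let udp = make_udp_socket("[::]:0".parse().unwrap())?;
    println!("bound on {}", udp.local_addr()?);
    Ok(())
}
```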
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
macOS IPv6 UDP sockets have tiny default send buffers, causing
immediate ENOBUFS on every send_to(). Set SO_SNDBUF and SO_RCVBUF
to 4MB using socket2, matching what works for high-throughput IPv4.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Same fix as server side — format!("{}:{}", ipv6, port) fails.
Use SocketAddr::new() for IPv6 and bind to [::] instead of 0.0.0.0.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
IPv6 UDP sends hit ENOBUFS much faster than IPv4 (smaller kernel
buffers, NDP overhead). Fixed:
- Adaptive backoff: 200us→10ms as errors accumulate, resets on success
- Higher error threshold: 50k instead of 1k before stopping
- Yield with sleep when errors have been seen recently
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
macOS connected IPv6 UDP sockets don't receive properly.
Use unconnected socket (send_to/recv_from) for IPv6 peers,
same as multi-connection mode.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
format!("{}:{}", ipv6_addr, port) produces an invalid socket address.
Use SocketAddr::new() instead. Also bind UDP to [::] for IPv6 peers
and 0.0.0.0 for IPv4 peers.
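Minimal illustration of why the string form fails and what the replacement does.

```rust
use std::net::{IpAddr, SocketAddr};

fn main() {
    let ip: IpAddr = "2001:db8::1".parse().unwrap();
    let port = 2000u16;

    // String formatting gives "2001:db8::1:2000", which does not parse:
    // an IPv6 socket address needs brackets, "[2001:db8::1]:2000".
    let broken = format!("{}:{}", ip, port);
    assert!(broken.parse::<SocketAddr>().is_err());

    // SocketAddr::new handles both address families correctly.
    let ok = SocketAddr::new(ip, port);
    println!("{}", ok); // [2001:db8::1]:2000
}
```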
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Server now binds on both IPv4 (0.0.0.0) and IPv6 (::) by default.
Uses tokio::select! to accept from whichever listener has a connection.
New flags:
--listen <addr> IPv4 listen address (default: 0.0.0.0, "none" to disable)
--listen6 <addr> IPv6 listen address (default: ::, "none" to disable)
Examples:
btest -s # listen on both v4 and v6
btest -s --listen6 none # IPv4 only
btest -s --listen none # IPv6 only
btest -s --listen 192.168.1.1 # specific IPv4 address
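A minimal dual-stack accept sketch. Setting IPV6_V6ONLY on the v6 listener (via socket2 here) is an assumption added so the two binds don't clash on Linux dual-stack defaults; handling of the "none" values and the real port configuration is omitted.

```rust
use socket2::{Domain, Protocol, Socket, Type};
use std::net::SocketAddr;
use tokio::net::TcpListener;

// Illustrative helper: bind :: as IPv6-only so it coexists with 0.0.0.0.
fn listen_v6_only(addr: SocketAddr) -> std::io::Result<TcpListener> {
    let sock = Socket::new(Domain::IPV6, Type::STREAM, Some(Protocol::TCP))?;
    sock.set_only_v6(true)?;
    sock.set_reuse_address(true)?;
    sock.bind(&addr.into())?;
    sock.listen(128)?;
    sock.set_nonblocking(true)?;
    TcpListener::from_std(sock.into())
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let v4 = TcpListener::bind("0.0.0.0:2000").await?;
    let v6 = listen_v6_only("[::]:2000".parse().unwrap())?;
    loop {
        // Accept from whichever family has a pending connection.
        let (_stream, peer) = tokio::select! {
            r = v4.accept() => r?,
            r = v6.accept() => r?,
        };
        println!("accepted {peer}"); // hand off to the protocol handler here
    }
}
```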
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New features:
- --syslog <address:port> sends structured events to remote syslog (RFC 5424 UDP)
Events: AUTH_SUCCESS, AUTH_FAILURE, TEST_START, TEST_END, TEST_RESULT
- EC-SRP5 authentication for both client and server modes
- TCP multi-connection support (session tokens, all 3 directions)
Bug fixes since v0.2.0:
- EC-SRP5 server: fixed gamma parity (was causing a 50% auth failure rate)
- EC-SRP5 server: use lift_x not redp1 for verification
- TCP send direction: server sends 12-byte status messages to client
- TCP both direction: TX loop injects status between data packets
- TCP data: send all zeros (dropped the 0x07 header that MikroTik rejected)
- TCP disconnect detection: running flag set on EOF
- UDP multi-connection: unconnected socket accepts all source ports
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
In BOTH direction, the TX loop now injects 12-byte status messages
every 1 second between data packets, reporting rx_bytes to the client.
Multi-connection mode also updated with same logic for all 3 cases:
- TX only: pure data
- RX only: status sender on writer
- BOTH: TX data + interleaved status messages
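A self-contained sketch of the interleaving described above. The 12-byte frame layout here only fills the seq and bytes fields; the exact header bytes changed in later commits and are not reproduced, and the Vec standing in for the TCP write half is a placeholder.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Hypothetical builder for the 12-byte status frame. Bytes 0..4 are left
/// zero here; the exact header layout is not reproduced in this sketch.
fn build_status(seq: u32, rx_bytes: u32) -> [u8; 12] {
    let mut msg = [0u8; 12];
    msg[4..8].copy_from_slice(&seq.to_le_bytes());
    msg[8..12].copy_from_slice(&rx_bytes.to_le_bytes());
    msg
}

fn main() {
    let data = vec![0u8; 1500];                 // TX payload
    let mut frames: Vec<Vec<u8>> = Vec::new();  // stand-in for the TCP write half
    let mut last_status = Instant::now();
    let mut seq = 0u32;

    for _ in 0..4_000 {
        frames.push(data.clone());              // data packet
        // Once per second, slip a status frame in between data packets so the
        // client keeps seeing rx_bytes while we blast TX.
        if last_status.elapsed() >= Duration::from_secs(1) {
            frames.push(build_status(seq, 0).to_vec());
            seq += 1;
            last_status = Instant::now();
        }
        sleep(Duration::from_micros(500));      // pacing just for the demo
    }
    println!("wrote {} frames ({} of them status)", frames.len(), seq);
}
```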
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The status sender and status_report_loop were BOTH calling swap(0)
on rx_bytes, racing each other. Now the status sender owns the swap
and prints stats itself. The report loop is skipped in RX-only TCP mode.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Was sending the cumulative rx_bytes total, which zigzagged because
MikroTik interprets the value as per-interval bandwidth.
Now tracks last_rx and sends (current - last) delta each second.
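Sketch of the delta bookkeeping (names and numbers are illustrative).

```rust
fn main() {
    // Cumulative rx totals sampled once per second (illustrative numbers).
    let samples: [u64; 5] = [0, 12_500_000, 25_000_000, 37_500_000, 50_000_000];
    let mut last_rx = 0u64;
    for total_rx in samples {
        let delta = total_rx - last_rx; // per-interval bytes: what MikroTik expects
        last_rx = total_rx;
        println!("status reports {delta} bytes this interval (cumulative {total_rx})");
    }
}
```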
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
pcap of MikroTik-as-server showed it sends periodic 12-byte status
messages back to the client even in RX-only mode. The client needs
these to display speed. Added tcp_status_sender that writes status
messages containing rx_bytes on the TCP write half every 1 second.
Reverted the "always bidirectional" change — TCP direction is
conditional, but RX mode now uses the writer for status instead
of keeping it idle.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>