fix: forward media to ALL connected peers, not just those with room active

The bug: when a local client joins a global room and sends media, the
egress task checked peer_links.active_rooms to decide where to forward.
But active_rooms tracks what PEERS announced (their rooms), not what
WE announced. So our own GlobalRoomActive signal went out but our
peer_links had empty active_rooms — media was dropped.
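A minimal sketch of the bug (the `PeerLink` struct and check function here are invented for illustration; only the `active_rooms` field mirrors the diff below): `active_rooms` records what the remote peer announced, so a room we only joined locally never appears in it.

```rust
use std::collections::HashSet;

// Illustrative model: active_rooms holds rooms the REMOTE peer announced.
struct PeerLink {
    active_rooms: HashSet<String>,
}

// The old egress check: forward only if the peer announced the room.
fn old_check(link: &PeerLink, room: &str) -> bool {
    link.active_rooms.contains(room)
}

fn main() {
    // We announce GlobalRoomActive for "lobby", but that updates the PEER's
    // state, not our own view of the link. Our link still has an empty set.
    let link = PeerLink { active_rooms: HashSet::new() };
    assert!(!old_check(&link, "lobby")); // media for "lobby" is dropped
    println!("forwarded: {}", old_check(&link, "lobby")); // prints "forwarded: false"
}
```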

Fix: for locally-originated media, send to ALL connected federation
peers unconditionally. The receiving relay decides whether to deliver
to local participants (if it has the room) or forward further. This
is correct because federation peers are explicitly configured — if
they're connected, they should receive global room media.

Multi-hop forwarding (handle_datagram) still filters by active_rooms
to prevent loops — only forwards to peers that announced the room.
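The multi-hop rule can be sketched as follows (names and types here are illustrative, not the codebase's actual `handle_datagram` signature): forwarded media only goes to peers that announced the room, and never back to the peer it arrived from.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative model of the multi-hop filter kept in handle_datagram.
struct PeerLink {
    active_rooms: HashSet<String>,
}

// Select forwarding targets for a datagram received from `from_peer`:
// skip the sender (loop prevention) and peers that never announced the room.
fn multi_hop_targets<'a>(
    links: &'a HashMap<String, PeerLink>,
    room: &str,
    from_peer: &str,
) -> Vec<&'a str> {
    links
        .iter()
        .filter(|(fp, link)| fp.as_str() != from_peer && link.active_rooms.contains(room))
        .map(|(fp, _)| fp.as_str())
        .collect()
}

fn main() {
    let mut links = HashMap::new();
    links.insert(
        "peer-a".to_string(),
        PeerLink { active_rooms: HashSet::from(["lobby".to_string()]) },
    );
    links.insert("peer-b".to_string(), PeerLink { active_rooms: HashSet::new() });
    // Datagram arrived from peer-a for "lobby": peer-a is excluded (loop
    // prevention) and peer-b never announced the room, so nothing forwards.
    assert!(multi_hop_targets(&links, "lobby", "peer-a").is_empty());
    // The same datagram arriving from peer-b would forward to peer-a only.
    assert_eq!(multi_hop_targets(&links, "lobby", "peer-b"), vec!["peer-a"]);
}
```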

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Siavash Sameni
Date: 2026-04-08 10:09:50 +04:00
parent af4c89f5f0
commit f4cc3b1a6b


@@ -138,7 +138,9 @@ impl FederationManager {
         }
     }
-    /// Forward locally-generated media to active peers for a global room.
+    /// Forward locally-generated media to all connected peers.
+    /// For locally-originated media, we send to ALL peers (they decide whether to deliver).
+    /// For forwarded media (multi-hop), handle_datagram filters by active_rooms.
     pub async fn forward_to_peers(&self, room_name: &str, room_hash: &[u8; 8], media_data: &Bytes) {
         let links = self.peer_links.lock().await;
         if links.is_empty() {
@@ -146,7 +148,9 @@ impl FederationManager {
         }
         let mut sent = 0u32;
         for (fp, link) in links.iter() {
-            if link.active_rooms.contains(room_name) {
+            // Send to all connected peers — they have the global room configured
+            // and will deliver to local participants or forward further
+            {
                 let mut tagged = Vec::with_capacity(8 + media_data.len());
                 tagged.extend_from_slice(room_hash);
                 tagged.extend_from_slice(media_data);
@@ -156,13 +160,6 @@ impl FederationManager {
                 }
             }
         }
-        if sent == 0 && !links.is_empty() {
-            // Debug: no peer had this room active
-            let active_rooms: Vec<_> = links.values()
-                .flat_map(|l| l.active_rooms.iter().cloned())
-                .collect();
-            warn!(room = %room_name, peer_count = links.len(), ?active_rooms, "no peer has this room active");
-        }
     }
// ── Trust verification (kept from previous implementation) ──