- max-width 900px → 1600px for analytics page
- Added analytics-grid CSS class for auto-fit columns
- Hash Stats: distribution + timeline side-by-side, adopters + top hops side-by-side
- RF, Topology, Channels already used analytics-row flex layout
- Cards now fill available screen width instead of being squished
Each cell displays the hex prefix (AB, C3, etc.). Color meaning:
- Dark: unused prefix (no known nodes)
- Green: 1 node using it (safe, no collision)
- Yellow→Red: 2+ nodes sharing prefix (collision risk)
Legend added below the matrix; the cell coloring is sketched below.
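A minimal sketch of the cell coloring, assuming the count of known
nodes per prefix is already available (names and exact colors are
illustrative):

    // Map the number of nodes sharing a prefix to a cell color.
    function cellColor(count) {
      if (count === 0) return '#111';    // dark: unused prefix
      if (count === 1) return '#2e7d32'; // green: one node, no collision
      // 2+ nodes: blend yellow toward red as the collision count grows
      const t = Math.min((count - 2) / 4, 1);
      return `rgb(220, ${Math.round(200 * (1 - t))}, 0)`;
    }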
Built O(1) prefix index on allNodes (replaces O(n) filter per hop).
Cache disambiguation results by hop sequence key, so the same path is
resolved only once. Subpaths endpoint: ~1s for 19K packets (was 30s+).
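A sketch of the two optimizations, assuming allNodes entries carry a
hex pubkey; disambiguateHops() is the shared resolver named below:

    // O(1) lookup: bucket nodes by their 1-byte (2 hex char) prefix once.
    const prefixIndex = new Map();
    for (const node of allNodes) {
      const p = node.pubkey.slice(0, 2).toLowerCase();
      if (!prefixIndex.has(p)) prefixIndex.set(p, []);
      prefixIndex.get(p).push(node);
    }

    // Memoize full-path resolution by hop sequence key.
    const pathCache = new Map();
    function disambiguateCached(hops) {
      const key = hops.join(',');
      if (!pathCache.has(key)) pathCache.set(key, disambiguateHops(hops));
      return pathCache.get(key);
    }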
Replace the naive first-match resolution in /api/analytics/subpaths and
/api/analytics/subpath-detail with a shared disambiguateHops() function.
Each path is now resolved with a forward+backward nearest-neighbor pass
and a distance sanity check, matching the live map and resolve-hops logic.
1-byte collision table now shows:
- Max pairwise distance between colliding nodes
- Classification (sketched after this list): Local (<50km, true collision),
Regional (50-200km, possible atmospheric propagation), Distant (>200km,
internet bridge/MQTT/separate mesh)
- Coordinates for each node
- Legend explaining each classification
- Sorted: distant first (most interesting)
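The classification thresholds as a function (a sketch; takes the max
pairwise distance in km computed above):

    function classifyCollision(maxKm) {
      if (maxKm < 50) return 'Local';      // true RF collision risk
      if (maxKm <= 200) return 'Regional'; // possible atmospheric propagation
      return 'Distant';                    // internet bridge / MQTT / separate mesh
    }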
The 500 limit was cutting off nodes, causing single-candidate false
matches (e.g. Osprey-Base in Seattle winning prefix 60 because little
russia in SF fell beyond the limit). With only ~400 nodes having coords,
a limit of 2000 ensures all are loaded.
After sequential disambiguation, sanity-check each hop: if it is >200km
(~1.8°) from both neighbors, it is almost certainly a 1-byte prefix
collision with a distant node, not an actual hop. Mark it unreliable
(server) or drop it from the animation (live map).
Fixes packet 19384 bouncing to Seattle: Spiden Repeater shares prefix
1D with an unknown local node.
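A sketch of the sanity check, with hop.pos as a resolved {lat, lon} or
null (field names illustrative):

    // Great-circle distance in km (standard haversine).
    const haversineKm = (a, b) => {
      const R = 6371, rad = d => d * Math.PI / 180;
      const dLat = rad(b.lat - a.lat), dLon = rad(b.lon - a.lon);
      const h = Math.sin(dLat / 2) ** 2 +
        Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(h));
    };

    // Flag hops that sit >200km from both resolved neighbors.
    function markUnreliable(hops) {
      for (let i = 1; i < hops.length - 1; i++) {
        const prev = hops[i - 1].pos, cur = hops[i].pos, next = hops[i + 1].pos;
        if (!prev || !cur || !next) continue;
        if (haversineKm(cur, prev) > 200 && haversineKm(cur, next) > 200) {
          hops[i].unreliable = true; // likely prefix collision, not a real hop
        }
      }
    }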
Instead of averaging all hops to a center point, walk the path
sequentially: each ambiguous hop picks the candidate closest to the
previously resolved hop. The forward pass seeds from the first known
position, the backward pass from the observer. This matches physical
reality: packets travel hop-by-hop within RF range.
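A sketch of the forward pass, assuming a hypothetical candidatesFor(prefix)
helper that returns nodes with coords sharing that 1-byte prefix, and
haversineKm from the earlier sketch:

    function resolveForward(prefixes, seedPos) {
      let last = seedPos; // first known position seeds the walk
      return prefixes.map(prefix => {
        const cands = candidatesFor(prefix);
        if (!cands.length) return null;
        // pick the candidate nearest the previously resolved hop
        const best = last
          ? cands.reduce((a, b) =>
              haversineKm(b.pos, last) < haversineKm(a.pos, last) ? b : a)
          : cands[0]; // no anchor yet: fall back to the first candidate
        last = best.pos;
        return best;
      });
    }

The backward pass runs the same walk over the reversed path, seeded
from the observer position.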
- ADVERT is payload_type 4, not 1
- Frontend sends observer_id (not just observer_name) since some
observers have a null name
- Observer position derived from the geographic center of nodes it
commonly receives ADVERTs from (sketched after this list)
- Last hop in path disambiguated by proximity to observer position
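A sketch of the observer-position derivation, assuming a hypothetical
advertSources(observerId) helper returning {lat, lon} points for the
nodes whose ADVERTs the observer commonly receives:

    function observerPosition(observerId) {
      const pts = advertSources(observerId);
      if (!pts.length) return null;
      return {
        lat: pts.reduce((s, p) => s + p.lat, 0) / pts.length,
        lon: pts.reduce((s, p) => s + p.lon, 0) / pts.length,
      };
    }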
Packets page now passes server-disambiguated full pubkeys.
The map needs a bidirectional prefix match since DB nodes may have
truncated (8-char) or full (64-char) keys. Removes the redundant
client-side geographic disambiguation, since the packets page already
resolved hops via /api/resolve-hops.
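The bidirectional match reduces to a two-way startsWith, since an
8-char key is a truncation of the 64-char one (a sketch):

    function keysMatch(a, b) {
      a = a.toLowerCase();
      b = b.toLowerCase();
      return a.startsWith(b) || b.startsWith(a);
    }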
- Observer name resolved to lat/lon for geographic context
- Last hop in path sorted by distance to observer (not center)
since it's the node that delivered the packet to the observer
- Packets page now passes observer_name to /api/resolve-hops
- Fixes CroatR1 being picked over Kpa Roof Solar for hop 8A
Was fetching /api/resolve-hops (with geographic disambiguation) then
throwing away the result and still passing raw 1-byte prefixes. Now
passes full pubkeys so the map doesn't re-disambiguate differently.
When the same packet is received multiple times (different hop counts
as it propagates), use the reception with the most hops for display.
Fixes routes showing incomplete paths when the first reception had
fewer hops than later ones.
Applies to both grouped query and individual packet detail endpoint.
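The selection is a simple reduction over receptions of the same packet
(a sketch; rx.path as the hop array is an assumed shape):

    // Assumes at least one reception exists for the packet.
    function bestReception(receptions) {
      return receptions.reduce((best, rx) =>
        rx.path.length > best.path.length ? rx : best);
    }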
When a 1-byte hop prefix matches multiple nodes, pick the one
geographically closest to the center of the other known hops in the
path (the same logic as /api/resolve-hops, but client-side).
Previously it just took the first match, which could be a node in
Chicago matching a local Bay Area hop prefix.
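A client-side sketch, assuming haversineKm as before; knownPositions
holds the unambiguous hops' coordinates, excluding the hop in question:

    function pickByCenter(candidates, knownPositions) {
      if (!knownPositions.length) return candidates[0];
      const n = knownPositions.length;
      const center = {
        lat: knownPositions.reduce((s, p) => s + p.lat, 0) / n,
        lon: knownPositions.reduce((s, p) => s + p.lon, 0) / n,
      };
      return candidates.reduce((a, b) =>
        haversineKm(b.pos, center) < haversineKm(a.pos, center) ? b : a);
    }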
No more 5-minute window or 200-packet limit. Fetches up to 10K
packets from the scrub point forward and replays them all.
When replay finishes, resumes live.
- nodeActivity cleared in clearNodeMarkers (removes ghost heatmap)
- When replay finishes, set scrubTs to last packet's timestamp so
updateTimelinePlayhead holds position instead of snapping right
- animatePacket checks for ADVERT type and adds new nodes to map
with marker if they have location data
- Unpause always resumes to live (removed missed packets prompt)
When scrubbing to a past time, only nodes that had advertised before
that time appear on the map. Nodes that hadn't been seen yet are
hidden.
- Added 'before' param to /api/nodes (filters first_seen <= before;
server sketch after this list)
- loadNodes(beforeTs) accepts optional timestamp filter
- clearNodeMarkers() removes all markers and data
- vcrReplayFromTs clears and reloads nodes for replay time
- vcrResumeLive reloads all nodes (no filter)
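A server-side sketch of the 'before' filter, assuming an Express route
and a better-sqlite3-style handle (actual names may differ):

    app.get('/api/nodes', (req, res) => {
      const before = Number(req.query.before); // epoch ms, optional
      const rows = before
        ? db.prepare('SELECT * FROM nodes WHERE first_seen <= ?').all(before)
        : db.prepare('SELECT * FROM nodes').all();
      res.json(rows);
    });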
- Canvas-based 7-segment digit renderer with ghost segments (a dim
background showing all segments, like a real LCD; sketched after this
list)
- Clock ticks every second in LIVE mode
- Removed the packet counter (1/182) from scrub/replay; the LCD now
only shows +N PKTS when paused with missed packets
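A minimal sketch of the digit renderer (segment geometry and colors
illustrative): every segment is drawn each frame, dim when unlit, which
gives the ghost effect.

    // Segment endpoints in a 1x2 unit cell; DIGITS maps digit -> lit segments.
    const SEGS = {
      a: [0, 0, 1, 0], b: [1, 0, 1, 1], c: [1, 1, 1, 2], d: [0, 2, 1, 2],
      e: [0, 1, 0, 2], f: [0, 0, 0, 1], g: [0, 1, 1, 1],
    };
    const DIGITS = ['abcdef', 'bc', 'abged', 'abgcd', 'fgbc',
                    'afgcd', 'afgedc', 'abc', 'abcdefg', 'abcdfg'];

    function drawDigit(ctx, n, x, y, s) {
      ctx.lineCap = 'round';
      ctx.lineWidth = s / 6;
      for (const [name, [x1, y1, x2, y2]] of Object.entries(SEGS)) {
        ctx.strokeStyle = DIGITS[n].includes(name)
          ? '#3f6'                      // lit segment
          : 'rgba(60, 255, 100, 0.08)'; // dim ghost segment
        ctx.beginPath();
        ctx.moveTo(x + x1 * s, y + y1 * s);
        ctx.lineTo(x + x2 * s, y + y2 * s);
        ctx.stroke();
      }
    }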
Dark green-on-black LCD display on right side of VCR bar showing:
- Mode: LIVE / PAUSE / PLAY 2x (green)
- Time: HH:MM:SS with glow (green)
- Packet counter: +N PKTS when paused, N/total during replay (amber)
Styled with a dark background, inset shadow, monospace font, and
text-shadow glow for a vintage VCR aesthetic.
ROOT CAUSE: the API query used ORDER BY timestamp DESC with no upper
bound, so scrubbing to 3h ago fetched the newest 200 packets (from
now), not packets near the scrub target.
Fix:
- Added 'until' query param to /api/packets (both grouped and flat)
- vcrReplayFromTs fetches a 5-minute window around the scrub target
(sketched below)
- Server restart required (old process was stale on port 3000)
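A client-side sketch of the windowed fetch; the 'until' param is from
this fix, while the matching 'since' lower bound is assumed here for
illustration:

    async function fetchAround(ts) {
      const HALF = 2.5 * 60 * 1000; // 5-minute window centered on the target
      const res = await fetch(`/api/packets?since=${ts - HALF}&until=${ts + HALF}`);
      return res.json();
    }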
1. scrubRelease now calls vcrReplayFromTs directly (no pause step)
2. vcrReplayFromTs builds the replay buffer from ONLY the fetched DB
packets, not merged with stale WS buffer entries. Starts the playhead
at 0 (the first fetched packet), not at the closest match in a mixed
buffer.
Fixes replay starting from the session start (00:06) instead of the
scrub target.
Scrub + release now pauses (no fetch). Hitting play detects
VCR.scrubTs and fetches packets from DB around that timestamp,
then replays 50 packets from the closest match.
The previous approach tried to fetch packets from the DB on scrub
release, causing race conditions, rubber-banding, and stale state.
Replaced with a dead-simple approach:
- Drag: moves playhead visually + updates clock
- Release: pauses at that position (VCR.scrubTs holds timestamp)
- No async fetch, no replay on release
- updateTimelinePlayhead uses scrubTs to hold position
- Click LIVE to resume
Red monospace clock next to LIVE/mode indicator shows current
playhead time. Updates during drag, replay ticks, and on mode
changes. Retro VCR aesthetic with text-shadow glow.
VCR.dragging=false was set on mouseup before the async DB fetch
completed. During the fetch (~100-200ms), updateTimelinePlayhead saw
dragging=false and playhead=-1, fell through to the else branch, and
set x=cw (snapping to the right edge).
Fix: keep dragging=true until the fetch resolves and replay starts.
The playhead had 'transition: left 0.3s ease' which made it animate
300ms behind the mouse during drag (felt stuck) and ease to an offset
position on release (felt like rubber-banding). Removed.
THE root cause: the timeline coordinate system uses Date.now() as its
right edge, making it a sliding window. Playhead position =
(packetTs - (now - scope)) / scope, but 'now' advances every frame,
sliding all positions left continuously, so any scrubbed position
drifts.
Fix: VCR.frozenNow captures Date.now() when leaving LIVE mode.
All timeline calculations use frozenNow instead of Date.now() during
REPLAY/PAUSED. Timeline stops sliding. Playhead stays put.
Cleared on return to LIVE.
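A sketch of the frozen coordinate math, with scope (the visible window
in ms) and cw (timeline width in px) as in the text above:

    function playheadX(packetTs, scope, cw) {
      // frozenNow is set when leaving LIVE and cleared on return
      const now = VCR.frozenNow ?? Date.now();
      const pct = (packetTs - (now - scope)) / scope;
      return Math.max(0, Math.min(1, pct)) * cw;
    }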
Root cause traced: mouseup set VCR.dragging=false and started the
async fetch. Between fetch start and response (~50-200ms), the 30s
interval fired updateTimelinePlayhead(), which found no matching branch
for REPLAY + old playhead and defaulted to x=cw (right-edge snap).
Fix: dragPct now takes priority over the buffer-based position during
REPLAY/PAUSED modes. It is cleared only when the fetch completes and
replay actually starts with real buffer data.
Root cause: scrub replay was playing through the ENTIRE buffer from
the scrub point to the end (thousands of packets), racing the playhead
forward to now. Now caps replay at 50 packets from the scrub point,
then pauses. Hit LIVE to resume real-time.
Two root causes fixed:
1. startReplay() was calling vcrResumeLive() when buffer exhausted,
snapping back to LIVE mode. Now pauses instead.
2. updateTimelinePlayhead() recalculated the position from timestamps
relative to now (which keeps shifting). It now holds at the dragPct
position when paused after a scrub.
Root cause: scrubbing moved playhead to closest buffer entry (recent
WS packets only), then replay tick() recalculated position relative
to current time, snapping it back. Fixed by splitting into two phases:
1. scrubVisual: only moves the DOM playhead element during drag
2. scrubCommit: on release, fetches packets from the DB around the
target timestamp, merges them into the buffer, seeks to the closest
entry, and starts replay (both phases sketched below).
No more rubber-banding.
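A sketch of the two phases; mergeIntoBuffer, closestIndex, and the
playhead element name are illustrative, while fetchAround is the
windowed DB fetch sketched earlier:

    function scrubVisual(pct) {
      playheadEl.style.left = `${pct * 100}%`; // DOM only, no state recalc
    }

    async function scrubCommit(targetTs) {
      const pkts = await fetchAround(targetTs); // DB packets near the target
      mergeIntoBuffer(pkts);
      startReplay(closestIndex(VCR.buffer, targetTs)); // seek, then play
    }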
VCR: the playhead now stays where you drag it; updateTimelinePlayhead()
skips updates during drag (VCR.dragging flag). The playhead is always
updated visually via direct DOM writes during scrub instead of relying
on the buffer position calculation.
Packets: replay button compact, inline next to View Route button.
Analytics: channel links now navigate to specific channel hash.
Click/drag on the timeline now works even when the buffer is empty.
Dragging to a point before the buffer's start triggers a DB fetch on
release. Removed the redundant click handler (drag handles clicks).
Visual playhead feedback during drag even without buffer data.