## Summary
Fixes #388 — expanded groups were fetched sequentially with O(n)
`packets.find()` lookups.
## Changes
1. **Parallel fetch**: Replaced sequential `for...of + await` loop in
`loadPackets()` with `Promise.all()` so all expanded group children are
fetched concurrently.
2. **O(1) Map lookup**: Replaced 3 instances of `packets.find(p =>
p.hash === hash)` with `hashIndex.get(hash)`:
- `loadPackets()` expanded group restore (~line 553)
- `select-observation` click handler (~line 1015)
- `pktToggleGroup()` (~line 2012)
## Perf justification
- **Before**: N expanded groups → N sequential API calls + N ×
O(packets.length) array scans
- **After**: N parallel API calls + N × O(1) Map lookups
- Typical N is 1-3 (minor severity, as noted in the issue), but the fix
is trivial and correct
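A minimal sketch of the combined change (`restoreExpandedGroups` and `attachChildren` are illustrative names, not the actual `packets.js` internals):

```javascript
// Sketch of both fixes: children for every expanded group are fetched
// concurrently via Promise.all, then attached through an O(1) Map lookup.
async function restoreExpandedGroups(expandedHashes, fetchChildren, hashIndex) {
  // N parallel API calls instead of a sequential for...of + await loop
  const childLists = await Promise.all(expandedHashes.map(fetchChildren));
  expandedHashes.forEach((hash, i) => attachChildren(hashIndex, hash, childLists[i]));
}

function attachChildren(hashIndex, hash, children) {
  const pkt = hashIndex.get(hash); // O(1) vs packets.find(p => p.hash === hash)
  if (pkt) pkt._children = children;
  return pkt;
}
```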
## Tests
All existing tests pass: `test-packet-filter.js` (62), `test-aging.js`
(29), `test-frontend-helpers.js` (433).
Co-authored-by: you <you@example.com>
## Summary
Fixes #410 — virtual scroll height miscalculation for expanded group
rows.
## Root Cause
When WebSocket messages add children to an already-expanded packet
group, `_rowCounts` becomes stale during the 200ms render debounce
window. Scroll events during this window call `renderVisibleRows()` with
stale row counts, causing wrong total height, spacer heights, and
visible range calculations.
## Changes
**public/packets.js:**
- Added `_rowCountsDirty` flag to track when row counts need
recomputation
- Added `_invalidateRowCounts()` — marks row counts as stale and clears
cumulative cache
- Added `_refreshRowCountsIfDirty()` — lazily recomputes `_rowCounts`
from `_displayPackets`
- Called `_invalidateRowCounts()` when WS handler adds children to
expanded groups (line ~402)
- Called `_refreshRowCountsIfDirty()` at top of `renderVisibleRows()`
before using row counts
- Reset `_rowCountsDirty` in all cleanup paths (destroy, empty display)
**test-packets.js:**
- Added 4 regression tests for `_invalidateRowCounts` /
`_refreshRowCountsIfDirty`
## Complexity
O(n) recomputation of `_rowCounts` when dirty (same as existing
`renderTableRows` path). Only triggers when WS modifies expanded group
children, which is infrequent relative to scroll events.
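The invalidate/refresh pair follows a standard lazy-recompute pattern; a sketch with an assumed state shape (the real code keeps these fields on the page module, not a `state` object):

```javascript
// Sketch: mark counts dirty on mutation, recompute only when a render needs them.
const state = {
  _displayPackets: [],
  _rowCounts: [],
  _rowCountsDirty: false,
  _cumulativeOffsetsCache: null,
};

function _invalidateRowCounts() {
  state._rowCountsDirty = true;
  state._cumulativeOffsetsCache = null; // cumulative offsets depend on row counts
}

function _refreshRowCountsIfDirty(getRowCount) {
  if (!state._rowCountsDirty) return;
  state._rowCounts = state._displayPackets.map(getRowCount); // O(n), only when dirty
  state._rowCountsDirty = false;
}
```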
Co-authored-by: you <you@example.com>
Fixes #537
## Problem
Observer filter in grouped mode only checked `p.observer_id` (the
primary observer), ignoring child observations. Grouped packets seen by
multiple observers would be hidden when filtering for a non-primary
observer.
## Fix
Two filter paths updated to also check `p._children`:
1. **Client-side display filter** (line ~1293): removed the
`!groupByHash` guard and added `_children` check so grouped packets are
included when any child observation matches
2. **WS real-time filter** (line ~360): added `_children` fallback check
The grouped row rendering (line ~1042) already correctly uses
`_observerFilterSet` for child filtering — no changes needed there.
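The shared check both paths now implement can be sketched as follows (the filter-set shape is assumed; field names are from the PR):

```javascript
// A grouped packet passes the observer filter when its representative OR
// any child observation matches the selected observers.
function passesObserverFilter(p, observerFilterSet) {
  if (!observerFilterSet || observerFilterSet.size === 0) return true;
  if (observerFilterSet.has(p.observer_id)) return true;
  return Array.isArray(p._children) &&
    p._children.some(c => observerFilterSet.has(c.observer_id));
}
```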
## Tests
Added 5 tests in `test-frontend-helpers.js`:
- Grouped packet with matching child observer is shown
- Grouped packet with no matching observers is hidden
- WS filter passes/rejects grouped packets correctly
- Source code assertions verifying both filter paths check `_children`
Co-authored-by: you <you@example.com>
## Summary
- When `groupByHash=true`, each group only carries its representative
(best-path) `observer_id`. The client-side filter was checking only that
field, silently dropping groups that were seen by the selected observer
but had a different representative.
- `loadPackets` now passes the `observer` param to the server so
`filterPackets`/`buildGroupedWhere` do the correct "any observation
matches" check.
- Client-side observer filter in `renderTableRows` is skipped for
grouped mode (server already filtered correctly).
- Both `db.go` and `store.go` observer filtering extended to support
comma-separated IDs (multi-select UI).
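On the client side, the multi-select handling amounts to normalizing the comma-separated value before sending it as the `observer` param (helper name is illustrative):

```javascript
// Turn the multi-select UI value into a clean list of observer IDs;
// empty input means "no filter".
function parseObserverParam(raw) {
  return (raw || '')
    .split(',')
    .map(s => s.trim())
    .filter(Boolean);
}
```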
## Test plan
- [ ] Set an observer filter on the Packets screen with grouping enabled
— all groups that have **any** observation from the selected observer(s)
should appear, not just groups where that observer is the representative
- [ ] Multi-select two observers — groups seen by either should appear
- [ ] Toggle to flat (ungrouped) mode — per-observation filter still
works correctly
- [ ] Existing grouped packets tests pass: `cd cmd/server && go test
./...`
Fixes #464

🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: you <you@example.com>
## Summary
Fixes #504 — Expanding a packet in the packets UI showed the same path
on every observation instead of each observation's unique path.
## Root Cause
PR #400 (fixing #387) added caching of `JSON.parse` results as
`_parsedPath` and `_parsedDecoded` properties on packet objects. When
observation packets are created via object spread (`{...parentPacket,
...obs}`), these cache properties are copied from the parent. Subsequent
calls to `getParsedPath(obsPacket)` hit the stale cache and return the
parent's path, ignoring the observation's own `path_json`.
## Fix
After every object spread that creates an observation packet from a
parent packet, delete the cache properties so they get re-parsed from
the observation's own data:
```js
delete obsPacket._parsedPath;
delete obsPacket._parsedDecoded;
```
Applied to all 5 spread sites in `public/packets.js`:
- Line 271: detail pane observation selection
- Line 504: flat view observation expansion
- Line 840: grouped view observation expansion
- Line 1012: child observation selection in grouped view
- Line 1982: WebSocket live update observation expansion
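The bug and the fix together, as a sketch (`getParsedPath` matches the cached helper from PR #400; `makeObsPacket` is an illustrative name for the spread sites):

```javascript
function getParsedPath(p) {
  if (p._parsedPath === undefined) {
    try { p._parsedPath = JSON.parse(p.path_json || '[]'); }
    catch { p._parsedPath = []; }
  }
  return p._parsedPath;
}

function makeObsPacket(parentPacket, obs) {
  const obsPacket = { ...parentPacket, ...obs };
  delete obsPacket._parsedPath;    // drop caches copied from the parent...
  delete obsPacket._parsedDecoded; // ...so the observation re-parses its own JSON
  return obsPacket;
}
```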
## Tests
Added 2 new tests in `test-frontend-helpers.js`:
1. Verifies observation packets get their own path after cache
invalidation (not the parent's)
2. Verifies observation path differs from parent path after cache
invalidation
All 431 frontend helper tests pass. All 62 packet filter tests pass.
---------
Co-authored-by: you <you@example.com>
## Problem
As described in #387, `JSON.parse()` is called repeatedly on the same
packet data across render cycles. With 30K packets, each render cycle
parses 60K+ JSON strings unnecessarily.
## Analysis
The server sends `decoded_json` and `path_json` as JSON strings. The
frontend parses them on-demand in multiple locations:
- `renderTableRows()` — for every row, every render
- WebSocket handling — when processing filtered packets
- `loadPackets()` — during packet loading
- Detail view rendering — when showing packet details
This creates O(n×m) parsing overhead where n = packet count and m =
render cycles.
## Solution
Add cached parse helpers that store parsed results on the packet object:
```javascript
function getParsedPath(p) {
  if (p._parsedPath === undefined) {
    try { p._parsedPath = JSON.parse(p.path_json || '[]'); } catch { p._parsedPath = []; }
  }
  return p._parsedPath;
}
```
Same pattern for `getParsedDecoded()`.
## Changes
- `public/packets.js`: Add helpers + replace 15+ JSON.parse calls
- `public/live.js`: Add helpers + replace 5 JSON.parse calls
## Benchmarks
Before: 60K+ JSON.parse calls per render cycle (30K packets)
After: ~30K parse calls (one per packet, cached thereafter)
Memory impact: Negligible (stores parsed objects that were already
created temporarily)
## Notes
- Cache uses `undefined` check to distinguish "not cached" from "cached
empty result"
- Property names `_parsedPath` and `_parsedDecoded` prefixed to avoid
collision with server fields
- No breaking changes to existing code paths
Fixes #387
---------
Co-authored-by: P. Clawmogorov <262173731+Alm0stSurely@users.noreply.github.com>
Co-authored-by: you <you@example.com>
## Summary
- `BuildBreakdown` was never ported from the deleted Node.js
`decoder.js` to Go — the server has returned `breakdown: {}` since the
Go migration (commit `742ed865`), so `createColoredHexDump()` and
`buildHexLegend()` in the frontend always received an empty `ranges`
array and rendered everything as monochrome
- Implemented `BuildBreakdown()` in `decoder.go` — computes labeled byte
ranges matching the frontend's `LABEL_CLASS` map: `Header`, `Transport
Codes`, `Path Length`, `Path`, `Payload`; ADVERT packets get sub-ranges:
`PubKey`, `Timestamp`, `Signature`, `Flags`, `Latitude`, `Longitude`,
`Name`
- Wired into `handlePacketDetail` (was `struct{}{}`)
- Also adds per-section color classes to the field breakdown table
(`section-header`, `section-transport`, `section-path`,
`section-payload`) so the table rows get matching background tints
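How the frontend consumes the ranges can be sketched like this (the range shape `{label, start, len}` and the CSS class names here are assumptions; the real `LABEL_CLASS` map lives in the frontend):

```javascript
const LABEL_CLASS = { Header: 'hex-header', Path: 'hex-path', Payload: 'hex-payload' };

// Map a byte offset in the hex dump to the CSS class of the range covering it.
function classifyByte(ranges, byteIndex) {
  const r = ranges.find(x => byteIndex >= x.start && byteIndex < x.start + x.len);
  return r ? (LABEL_CLASS[r.label] || '') : '';
}
```

With an empty `ranges` array — the pre-fix behavior — every byte classifies to `''`, which is exactly the monochrome rendering described above.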
## Test plan
- [x] Open any packet detail pane — hex dump should show color-coded
sections (red header, orange path length, blue transport codes, green
path hops, yellow/colored payload)
- [x] Legend below action buttons should appear with color swatches
- [x] ADVERT packets: PubKey/Timestamp/Signature/Flags each get their
own distinct color
- [x] Field breakdown table section header rows should be tinted per
section
- [x] 8 new Go tests: all pass
Closes #329
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Problem
On a long-running session the packets page consumed 8 GB of browser
memory and 20%+ CPU on an 8-core machine. Root causes:
1. **Unbounded `packets` array growth via WebSocket** —
`packets.unshift()` was called for every new unique hash, but nothing
ever trimmed the array. After hours of live traffic the array grew well
past the initial 50K load limit.
2. **Unbounded `pauseBuffer`** — all WS messages queued while paused, no
cap.
3. **Unbounded `_children` growth** — expanded groups received a
`unshift(p)` on every matching WS message with no size limit.
4. **O(n) `observers.find()` inside the O(n) render loop** — with 50K
rows, each render triggered up to 50K linear scans through the
observers list.
5. **Full DOM rebuild on every WS message** — `renderTableRows()` was
called synchronously on every WebSocket batch, reconstructing the entire
table on each incoming packet.
## Changes
- `packets[]` is now trimmed to `PACKET_LIMIT` after each WS batch;
evicted entries are also removed from `hashIndex` to prevent stale
references.
- `pauseBuffer` capped at 2,000 entries (oldest dropped).
- `_children` capped at 200 entries on WS prepend.
- `renderTableRows()` on the WS path is debounced to 200 ms, batching
rapid updates into a single redraw.
- `observersById = new Map()` pre-built from the observers array; all
`observers.find()` calls in the render loop and WS filter replaced with
O(1) `Map.get()`.
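The bounding strategy can be sketched as two small helpers (names assumed; the caps are the values from the PR):

```javascript
// Trim the main array after each WS batch, evicting from hashIndex so the
// Map never holds references to dropped packets.
function trimPackets(packets, hashIndex, limit) {
  while (packets.length > limit) {
    const evicted = packets.pop(); // oldest entries live at the tail
    hashIndex.delete(evicted.hash);
  }
}

// Bounded prepend for pauseBuffer (cap 2,000) and _children (cap 200).
function prependCapped(arr, item, cap) {
  arr.unshift(item);
  if (arr.length > cap) arr.length = cap; // drop oldest (tail) entries
}
```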
## Test plan
- [x] Load the packets page and leave it running for several minutes
with live WebSocket traffic — memory in DevTools should remain stable
rather than growing continuously
- [x] Pause live updates, wait for several messages, then resume —
buffer replays correctly and display updates
- [x] Expand a packet group and leave it open during live traffic —
children update but don't grow past 200
- [x] Region filter still works correctly (relies on the observer Map
lookup)
- [x] Observer name / IATA badge renders correctly in grouped and flat
mode
## Summary
Removes an unreachable duplicate `return offsets;` statement in the
`_cumulativeRowOffsets()` function in `packets.js`. The second return
was dead code found during review of PR #402.
## Changes
- **`public/packets.js`**: Removed the duplicate `return offsets;` on
what was line 1137 (the line immediately after the first, reachable
`return offsets;`)
- **`public/index.html`**: Cache buster bump
## Testing
This is a dead code removal — the duplicate return was unreachable. No
behavior change. No new tests needed as existing tests already cover
`_cumulativeRowOffsets()` behavior.
Fixes #447
Co-authored-by: you <you@example.com>
## Summary
Replace all `observers.find()` linear scans in `packets.js` with O(1)
`Map.get()` lookups, eliminating ~300K comparisons per render cycle at
30K+ rows.
## Changes
- Added `observerMap` (`Map<id, observer>`) built once when observers
load
- Replaced all 6 `observers.find()` call sites with `observerMap.get()`:
- `obsName()` — called per row for observer name display
- Region filter check in packet filtering
- Observer dropdown label in filter UI
- Group header region lookup
- Child row region lookup
- Flat row region lookup
- Map is cleared on reset and rebuilt on each `loadObservers()` call
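A sketch of the Map-backed lookup (the observer shape `{ id, name }` is assumed):

```javascript
let observerMap = new Map();

// O(k), rebuilt once per loadObservers() call.
function rebuildObserverMap(observers) {
  observerMap = new Map(observers.map(o => [o.id, o]));
}

// O(1) per row, vs observers.find(o => o.id === id) per row before.
function obsName(id) {
  const o = observerMap.get(id);
  return o ? o.name : '';
}
```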
## Complexity
- **Before:** O(k) per row × 30K rows = O(30K × k) where k = observer
count (~10)
- **After:** O(1) per row × 30K rows = O(30K)
- Map construction: O(k) once, negligible
## Testing
- All Go tests pass (`cmd/server`, `cmd/ingestor`)
- All frontend tests pass (`test-packet-filter.js`: 62 passed,
`test-aging.js`: 29 passed, `test-frontend-helpers.js`: 241 passed)
Fixes #383
Co-authored-by: you <you@example.com>
## Summary
Fixes #451 — packet detail pane crash on direct routed packets where
`pathHops` is `null`.
## Root Cause
`JSON.parse(pkt.path_json)` can return literal `null` when the DB stores
`"null"` for direct routed packets. The existing code only had a catch
block for parse errors, but `null` is valid JSON — so the parse succeeds
and `pathHops` ends up `null` instead of `[]`.
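The guard as a standalone sketch (helper name assumed): `JSON.parse('null')` succeeds and returns `null`, so the `catch` block never fires for direct routed packets — the `|| []` fallback is what makes `pathHops` an array.

```javascript
function parsePathHops(pathJson) {
  let pathHops;
  try { pathHops = JSON.parse(pathJson || '[]') || []; } // || [] guards literal null
  catch { pathHops = []; }                               // catch guards malformed JSON
  return pathHops;
}
```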
## Changes
- **`public/packets.js`**: Added `|| []` after `JSON.parse(...)` in both
`buildFlatRowHtml` (table rows) and the detail pane (`selectPacket`),
ensuring `pathHops` is always an array.
- **`test-frontend-helpers.js`**: Added 2 regression tests verifying the
null guards exist in both code paths.
- **`public/index.html`**: Cache buster bump.
## Testing
- All 229 frontend helper tests pass
- All 62 packet filter tests pass
- All 29 aging tests pass
Co-authored-by: you <you@example.com>
## Summary
Fixes the critical performance issue where `renderTableRows()` rebuilt
the **entire** table innerHTML (up to 50K rows) on every update —
WebSocket arrivals, filter changes, group expand/collapse, and theme
refreshes.
## Changes
### Lazy Row Generation (`renderVisibleRows`) — fixes #422
- Row HTML strings are **only generated for the visible slice + 30-row
buffer** on each render
- `_displayPackets` stores the filtered data array;
`renderVisibleRows()` calls `buildGroupRowHtml`/`buildFlatRowHtml`
lazily for ~60-90 visible entries
- Previously, `displayPackets.map(buildGroupRowHtml)` built HTML for ALL
30K+ packets on every render — the expensive work (JSON.parse, observer
lookups, template literals) ran for every packet regardless of
visibility
### Unified Row Count via `_getRowCount()` — fixes #424
- Single function `_getRowCount(p)` computes DOM row count for any entry
(1 for flat/collapsed, 1+children for expanded groups)
- Used by BOTH `_rowCounts` computation AND `renderVisibleRows` —
eliminates divergence risk between row counting and row building
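A sketch of the unified helper (the `_expanded` flag and parameter shape are assumptions; the real function reads page-level state):

```javascript
// One DOM row for flat or collapsed entries; 1 + filtered children for
// expanded groups. Used by both row counting and row building.
function _getRowCount(p, groupByHash, observerFilterSet) {
  if (!groupByHash || !p._expanded || !Array.isArray(p._children)) return 1;
  const children = observerFilterSet
    ? p._children.filter(c => observerFilterSet.has(c.observer_id))
    : p._children;
  return 1 + children.length;
}
```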
### Hoisted Observer Filter Set — fixes #427
- `_observerFilterSet` created once in `renderTableRows()`, reused
across `buildGroupRowHtml`, `_getRowCount`, and child filtering
- Previously, `new Set(filters.observer.split(','))` was created inside
`buildGroupRowHtml` for every packet AND again in the row count callback
### Dynamic Colspan — fixes #426
- `_getColCount()` reads column count from the thead instead of
hardcoded `colspan="11"`
- Spacers and empty-state messages use the actual column count
### Null-Safety in `buildFlatRowHtml` — fixes #430
- `p.decoded_json || '{}'` fallback added, matching
`buildGroupRowHtml`'s existing null-safety
- Prevents TypeError on null/undefined `decoded_json` in flat
(ungrouped) mode
### Behavioral Tests — fixes #428
- Replaced 5 source-grep tests with behavioral unit tests for
`_getRowCount`:
- Flat mode always returns 1
- Collapsed group returns 1
- Expanded group returns 1 + child count
- Observer filter correctly reduces child count
- Null `_children` handled gracefully
- Retained source-level assertions only where behavioral testing isn't
practical (e.g., verifying lazy generation pattern exists)
### Other Improvements
- Cumulative row offsets cached in `_cumulativeOffsetsCache`,
invalidated on row count changes
- Debounced WebSocket renders (200ms) coalesce rapid packet arrivals
- `destroy()` properly cleans up all virtual scroll state
## Performance Benchmarks — fixes #423
**Methodology:** Row building cost measured by counting
`buildGroupRowHtml` calls per render cycle on 30K grouped packets.
| Scenario | Before (eager) | After (lazy) | Improvement |
|----------|----------------|--------------|-------------|
| Initial render (30K packets) | 30,000 `buildGroupRowHtml` calls | ~90 calls (60 visible + 30 buffer) | **333× fewer calls** |
| Scroll event | 0 calls (pre-built) | ~90 calls (rebuild visible slice) | Trades O(1) scroll for O(n) initial savings |
| WS packet arrival | 30,000 calls (full rebuild) | ~90 calls (debounced + lazy) | **333× fewer calls** |
| Filter change | 30,000 calls | ~90 calls | **333× fewer calls** |
| Memory (row HTML cache) | ~2MB string array for 30K packets | 0 (no cache, build on demand) | **~2MB saved** |
**Per-call cost of `buildGroupRowHtml`:** Each call performs JSON.parse
of `decoded_json`, `path_json`, `observers.find()` lookup, and template
literal construction. At 30K packets, the eager approach spent
~400-500ms on row building alone (measured via `performance.now()` on
staging data). The lazy approach builds ~90 rows in ~1-2ms.
**Net effect:** `renderTableRows()` goes from O(n) string building +
O(1) DOM insertion to O(1) data assignment + O(visible) string building
+ O(visible) DOM insertion. For n=30K and visible≈60, this is ~333× less
work per render cycle.
**Trade-off:** Scrolling now rebuilds ~90 rows per RAF frame instead of
slicing pre-built strings. This costs ~1-2ms per scroll event, well
within the 16ms frame budget. The trade-off is overwhelmingly positive
since renders happen far more frequently than full-table scrolls.
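The visible-slice math behind the ~90-row figure can be sketched as follows, assuming a fixed row height and the 30-row buffer described above:

```javascript
// Compute the [first, last) entry indices to build for the current viewport.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, buffer = 30) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - buffer);
  const last = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight) + buffer);
  return { first, last };
}
```

For a 600px viewport at 36px rows this yields roughly 17 visible rows plus the buffer on each side, which is where the "~60-90 entries" figure comes from.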
## Tests
- 247 frontend helper tests pass (including 18 virtual scroll tests)
- 62 packet filter tests pass
- 29 aging tests pass
- Go backend tests pass
## Remaining Debt (tracked in issues)
- #425: Hardcoded `VSCROLL_ROW_HEIGHT=36` and `theadHeight=40` — should
be measured from DOM
- #429: 200ms WS debounce delay — value works well in practice but lacks
formal justification
- #431: No scroll position preservation on filter change or group
expand/collapse
Fixes #380
---------
Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
## Summary
Fixes #381 — The "My Nodes" filter in `packets.js` was making a **server
API call inside `renderTableRows()`** on every render cycle. With
WebSocket updates arriving every few seconds while the toggle was
active, this created continuous unnecessary server load.
## What Changed
**`public/packets.js`** — Replaced the `api('/packets?nodes=...')`
server call with a pure client-side filter:
```js
// Before: server round-trip on every render
const myData = await api('/packets?nodes=' + allKeys.join(',') + '&limit=500');
displayPackets = myData.packets || [];
// After: filter already-loaded packets client-side
displayPackets = displayPackets.filter(p => {
  const dj = p.decoded_json || '';
  return allKeys.some(k => dj.includes(k));
});
```
This uses the exact same matching logic as the server's
`QueryMultiNodePackets()` — a string contains check on `decoded_json`
for each pubkey — but without the network round-trip.
**`test-frontend-helpers.js`** — Added 5 unit tests for the filter
logic:
- Single and multiple pubkey matching
- No matches / empty keys edge case
- Null/empty `decoded_json` handled gracefully
**`public/index.html`** — Cache busters bumped.
## Test Results
- Frontend helpers: **232 passed, 0 failed** (including 5 new tests)
- Packet filter: **62 passed, 0 failed**
- Aging: **29 passed, 0 failed**
Co-authored-by: you <you@example.com>
WS broadcast pushes all packets regardless of the selected time
window filter. This caused old packets to appear in the table even
when the API correctly returned zero results for the time range.
Add time window check to the WS packet filter — drops packets
with timestamps older than the selected window cutoff.
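A sketch of the check (names assumed: timestamps in seconds, selected window in minutes, 0 meaning "All time"):

```javascript
// Drop WS packets older than the selected time window cutoff.
function withinTimeWindow(pkt, windowMin, nowSec) {
  if (!windowMin) return true; // no window selected
  return pkt.timestamp >= nowSec - windowMin * 60;
}
```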
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The outer IIFE declares const isMobile (line 27, from #340) and
renderLeft() declares its own const isMobile (line 821, pre-existing).
A const declaration is hoisted to the top of its enclosing scope but
left uninitialized (the temporal dead zone), so referencing isMobile at
line 574 (inside renderLeft but before line 821) throws 'Cannot access
isMobile before initialization'.
Rename the inner declaration to isNarrow since it uses a different
breakpoint (640px for column hiding vs 1024px for the packet limit).
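A minimal reproduction of the pattern (illustrative, not the real renderLeft):

```javascript
const isMobile = true; // outer declaration, as in the outer IIFE

function renderLeft() {
  let result;
  try {
    result = isMobile; // shadowed by the const below → ReferenceError (TDZ)
  } catch (e) {
    result = e instanceof ReferenceError ? 'TDZ error' : 'other';
  }
  const isMobile = false; // inner shadowing declaration (pre-rename)
  return result;
}
```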
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
## Summary
Fixes #326 — the packets page crashes mobile browsers (iOS Safari, Edge)
by loading 50K+ packets when no time filter is persisted in
localStorage.
## Root Cause
Two problems in public/packets.js:
### Bug 1: savedTimeWindowMin defaults to 0 instead of 15
localStorage.getItem('meshcore-time-window') returns
null when never set, and Number(null) = 0. The guard checked < 0 but
not <= 0, so savedTimeWindowMin = 0 meant "All time" — fetching all
50K+ packets.
**Fix:** Changed < 0 to <= 0 in both the initialization guard (line 30)
and the change handler (line 758).
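The corrected default, as a sketch (helper name assumed): Number(null) is 0 and Number('abc') is NaN, so anything that isn't a positive finite number must fall back to 15 minutes.

```javascript
function defaultTimeWindow(storedValue) {
  const n = Number(storedValue);
  return Number.isFinite(n) && n > 0 ? n : 15; // <= 0, NaN, null → 15
}
```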
### Bug 2: No mobile protection against large packet loads
Even with valid large time windows, mobile browsers crash under the
weight of thousands of DOM rows and packet data (~1.4 GB WebKit memory
limit).
**Fix:**
- Detect mobile viewport: window.innerWidth <= 768
- Cap limit at 1000 on mobile (vs 50000 on desktop)
- Disable 6h/12h/24h options and hide "All time" on mobile
- Reset persisted windows >3h to 15 min on mobile
## Testing
Added 9 unit tests in test-frontend-helpers.js covering:
- savedTimeWindowMin defaults to 15 when localStorage returns null
- savedTimeWindowMin defaults to 15 when localStorage returns "0"
- Valid values (60) are preserved
- Negative and NaN values default to 15
- PACKET_LIMIT is 1000 on mobile, 50000 on desktop
- Mobile caps large time windows (1440 → 15) but allows 180
All 218 frontend helper tests pass. Packet filter (62) and aging (29)
tests also pass.
## Changes
| File | Change |
|------|--------|
| public/packets.js | Fix <= 0 guard, add mobile detection, cap limit, restrict time options |
| public/index.html | Cache buster bump |
| test-frontend-helpers.js | 9 new regression tests for time window defaults and mobile caps |
---------
Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
## Summary
Surfaces transport route types in the packets view by adding a **"T"
badge** next to the payload type badge for packets with
`TRANSPORT_FLOOD` (route type 0) or `TRANSPORT_DIRECT` (route type 3)
routes.
This helps mesh analysis — communities can quickly identify transported
packets and gain insight into transport route adoption.
Closes #241
## What Changed
### Frontend (`public/`)
- **app.js**: Added `isTransportRoute(rt)` and `transportBadge(rt)`
helper functions that render a `<span class="badge
badge-transport">T</span>` badge with the full route type name as a
tooltip
- **packets.js**: Applied `transportBadge()` in all three packet row
render paths:
- Flat (ungrouped) packet rows
- Grouped packet header rows
- Grouped packet child rows
- **style.css**: Added `.badge-transport` class with amber styling and
CSS variable support (`--transport-badge-bg`, `--transport-badge-fg`)
for theme customization
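A sketch of the two helpers (route-type values and badge markup from the PR; the real implementations live in app.js):

```javascript
const ROUTE_NAMES = { 0: 'TRANSPORT_FLOOD', 3: 'TRANSPORT_DIRECT' };

function isTransportRoute(rt) {
  return rt === 0 || rt === 3;
}

// Full route type name as the tooltip; empty string for non-transport routes.
function transportBadge(rt) {
  if (!isTransportRoute(rt)) return '';
  return `<span class="badge badge-transport" title="${ROUTE_NAMES[rt]}">T</span>`;
}
```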
### Backend (`cmd/server/`)
- **decoder_test.go**: Added 6 new tests covering:
- `TestDecodeHeader_TransportFlood` — verifies route type 0 decodes as
TRANSPORT_FLOOD
- `TestDecodeHeader_TransportDirect` — verifies route type 3 decodes as
TRANSPORT_DIRECT
- `TestDecodeHeader_Flood` — verifies route type 1 (non-transport)
decodes correctly
- `TestIsTransportRoute` — verifies the helper identifies transport vs
non-transport routes
- `TestDecodePacket_TransportFloodHasCodes` — verifies transport codes
are extracted from T_FLOOD packets
- `TestDecodePacket_FloodHasNoCodes` — verifies FLOOD packets have no
transport codes
## Visual
In the packets table Type column, transport packets now show:
```
[Channel Msg] [T] ← transport packet
[Channel Msg] ← normal flood packet
```
The "T" badge has an amber color scheme and shows the full route type
name on hover.
## Tests
- All Go tests pass (`cmd/server` and `cmd/ingestor`)
- All frontend tests pass (`test-packet-filter.js`, `test-aging.js`,
`test-frontend-helpers.js`)
- Cache busters bumped in `index.html`
---------
Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
## Summary
Several features and fixes from a live deployment of the Go v3.0.0
backend.
### geo_filter — full enforcement
- **Go backend config** (`cmd/server/config.go`,
`cmd/ingestor/config.go`): added `GeoFilterConfig` struct so
`geo_filter.polygon` and `bufferKm` from `config.json` are parsed by
both the server and ingestor
- **Ingestor** (`cmd/ingestor/geo_filter.go`, `cmd/ingestor/main.go`):
ADVERT packets from nodes outside the configured polygon + buffer are
dropped *before* any DB write — no transmission, node, or observation
data is stored
- **Server API** (`cmd/server/geo_filter.go`, `cmd/server/routes.go`):
`GET /api/config/geo-filter` endpoint returns the polygon + bufferKm to
the frontend; `/api/nodes` responses filter out any out-of-area nodes
already in the DB
- **Frontend** (`public/map.js`, `public/live.js`): blue polygon overlay
(solid inner + dashed buffer zone) on Map and Live pages, toggled via
"Mesh live area" checkbox, state shared via localStorage
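The core containment test behind the enforcement is the standard ray-casting point-in-polygon check; a sketch (buffer expansion and the Go implementation details omitted; the polygon is assumed to be an array of [lat, lon] pairs):

```javascript
function pointInPolygon(lat, lon, polygon) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [latI, lonI] = polygon[i];
    const [latJ, lonJ] = polygon[j];
    // Toggle on each polygon edge the horizontal ray from the point crosses.
    const crosses = (lonI > lon) !== (lonJ > lon) &&
      lat < ((latJ - latI) * (lon - lonI)) / (lonJ - lonI) + latI;
    if (crosses) inside = !inside;
  }
  return inside;
}
```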
### Automatic DB pruning
- Add `retention.packetDays` to `config.json` to delete transmissions +
observations older than N days on a daily schedule (1 min after startup,
then every 24h). Nodes and observers are never pruned.
- `POST /api/admin/prune?days=N` for manual runs (requires `X-API-Key`
header if `apiKey` is set)
```json
"retention": {
  "nodeDays": 7,
  "packetDays": 30
}
```
### tools/geofilter-builder.html
Standalone HTML tool (no server needed) — open in browser, click to
place polygon points on a Leaflet map, set `bufferKm`, copy the
generated `geo_filter` JSON block into `config.json`.
### scripts/prune-nodes-outside-geo-filter.py
Utility script to clean existing out-of-area nodes from the database
(dry-run + confirm). Useful after first enabling geo_filter on a
populated DB.
### HB column in packets table
Shows the hop hash size in bytes (1–4) decoded from the path byte of
each packet's raw hex. Displayed as **HB** between Size and Type
columns, hidden on small screens.
## Test plan
- [x] ADVERT from node outside polygon is not stored (no new row in
nodes or transmissions)
- [x] `GET /api/config/geo-filter` returns polygon + bufferKm when
configured, `{polygon: null, bufferKm: 0}` when not
- [x] `/api/nodes` excludes nodes outside polygon even if present in DB
- [x] Map and Live pages show blue polygon overlay when configured;
checkbox toggles it
- [x] `retention.packetDays: 30` deletes old transmissions/observations
on startup and daily
- [x] `POST /api/admin/prune?days=30` returns `{deleted: N, days: 30}`
- [x] `tools/geofilter-builder.html` opens standalone, draws polygon,
copies valid JSON
- [x] HB column shows 1–4 for all packets in grouped and flat view
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary
- fix packets initial load to honor persisted `meshcore-time-window`
before the filter UI is rendered
- keep the dropdown and effective query window in sync via a shared
`savedTimeWindowMin` value
- add a frontend regression test to ensure `loadPackets()` falls back to
persisted time window when `#fTimeWindow` is not yet present
- bump cache busters in `public/index.html`
## Root cause
`loadPackets()` could run before the filter bar existed, so
`document.getElementById('fTimeWindow')` was null and it fell back to
`15` minutes even though localStorage had a different saved value.
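The fallback, as a sketch (element id and key from the PR; the DOM access is abstracted behind a lookup function so the sketch is testable):

```javascript
// Persisted value wins when the filter bar hasn't been rendered yet.
function effectiveTimeWindowMin(getElementById, savedTimeWindowMin) {
  const el = getElementById('fTimeWindow');
  return el ? Number(el.value) : savedTimeWindowMin;
}
```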
## Testing
- `node test-frontend-helpers.js`
- `node test-packet-filter.js`
- `node test-aging.js`
---------
Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
## Summary
Fixes BYOP modal stacking on the Packets page by preventing duplicate
global click handlers and enforcing a single BYOP overlay instance.
## Root cause
Packets page init could register document-level click handlers
repeatedly across SPA navigations. Clicking BYOP then spawned multiple
overlays, and each close action removed only one layer.
## Changes
- `public/packets.js`
- Added `bindDocumentHandler(...)` to de-duplicate document click
handlers.
- Applied it to packets action delegation, filter menu outside-click
close, and column menu close.
- Added `removeAllByopOverlays()` and call it before opening BYOP.
- Tagged BYOP overlay with `.byop-overlay` class.
- Updated close logic to remove all BYOP overlays in one click.
- Scoped BYOP result lookup to the active overlay
(`overlay.querySelector`).
- Added destroy cleanup for document handlers and stray BYOP overlays.
- `test-frontend-helpers.js`
- Added regression tests for:
- BYOP singleton overlay behavior
- one-click close removing all overlays
- document click handler de-dup logic
- `public/index.html`
- Bumped cache busters for JS/CSS assets.
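The de-dup helper can be sketched as follows (signature assumed): handlers are keyed, so a repeated SPA init replaces the previous listener instead of stacking another one.

```javascript
const _docHandlers = new Map();

function bindDocumentHandler(doc, key, type, handler) {
  const prev = _docHandlers.get(key);
  if (prev) doc.removeEventListener(prev.type, prev.handler); // unbind the stale one
  doc.addEventListener(type, handler);
  _docHandlers.set(key, { type, handler });
}
```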
## Validation
- `node test-frontend-helpers.js`
- `node test-packet-filter.js`
- `node test-aging.js`
All passed locally.
Fixes #249
Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Compared decoder.js against the MeshCore firmware source (Dispatcher.cpp,
Packet.h, Mesh.cpp, AdvertDataHelpers.h) and fixed all mismatches:
1. Field order: transport codes now parsed BEFORE path_length byte,
matching the spec: [header][transport_codes?][path_length][path][payload]
2. ACK payload: was incorrectly decoded as dest(1)+src(1)+ackHash(4).
Firmware shows ACK is just checksum(4) — no dest/src hashes.
3. TRACE payload: was incorrectly decoded as flags(1)+tag(4)+dest(6)+src(1).
Firmware shows tag(4)+authCode(4)+flags(1)+pathData.
4. ADVERT appdata: added missing feature1 (0x20 flag) and feature2
(0x40 flag) parsing — 2-byte fields between location and name.
5. Transport code field naming: renamed nextHop/lastHop to code1/code2
to match spec terminology (transport_code_1/transport_code_2).
6. Fixed incorrect field size labels in packets.js hex breakdown:
dest/src are 1 byte, MAC is 2 bytes (not 6B/6B/4B).
7. Fixed ANON_REQ/PATH comment typos (dest was listed as 6 bytes,
MAC as 4 bytes — both wrong, code was already correct).
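The corrected field order from item 1 can be sketched as a splitter (byte widths are illustrative assumptions — two 2-byte little-endian transport codes when the route type carries them):

```javascript
// [header][transport_codes?][path_length][path][payload]
function splitPacket(bytes, hasTransportCodes) {
  let off = 1; // skip the 1-byte header
  let codes = null;
  if (hasTransportCodes) {
    codes = [bytes[off] | (bytes[off + 1] << 8),      // transport_code_1
             bytes[off + 2] | (bytes[off + 3] << 8)]; // transport_code_2
    off += 4;
  }
  const pathLen = bytes[off++]; // path_length comes AFTER the transport codes
  const path = bytes.slice(off, off + pathLen);
  const payload = bytes.slice(off + pathLen);
  return { codes, path, payload };
}
```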
All 329 tests pass (66 decoder + 263 spec/golden).
#210: Add role="img" aria-label to 9 Chart.js canvases in node-analytics.js
and observer-detail.js with descriptive labels.
#211: Add scope="col" to all <th> elements across analytics.js, audio-lab.js,
compare.js, node-analytics.js, nodes.js, observer-detail.js, observers.js,
and packets.js (40+ headers).
#212: Add aria-label to packet filter input and time window select in
packets.js. Add for/id associations to all customize.js inputs: branding,
theme colors, node/type colors, heatmap sliders, onboarding fields, and
export controls.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add detail-collapsed class to split-layout initial HTML so the empty
right panel is hidden before any packet is selected. The class is
already removed when a packet row is clicked and re-added when the
close button is pressed.
Add 3 tests verifying the detail pane starts collapsed and that
open/close toggling is wired correctly.
Bump cache busters.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
When a CHANNEL_MSG (GRP_TXT) can't be decrypted, the decoder now includes:
- channelHashHex: zero-padded uppercase hex string of the channel hash byte
- decryptionStatus: 'decrypted', 'no_key', or 'decryption_failed'
Frontend changes:
- Packet list preview shows '🔒 Ch 0xXX (no key)' or '(decryption failed)'
- Detail pane hex breakdown shows channel hash with status label
- Detail pane message area shows channel hash info for undecrypted packets
6 new decoder tests (58 total): channelHashHex formatting, decryptionStatus
for no keys, empty keys, bad keys, and short encrypted data.
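The channelHashHex formatting is a one-liner; a sketch (helper name hypothetical, behavior as described above):

```javascript
// Formats a single channel-hash byte as zero-padded uppercase hex.
const channelHashHex = (byte) =>
  byte.toString(16).padStart(2, '0').toUpperCase();
```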
Fixes #123
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Closes #125
When the ✕ close button (or Escape) is pressed, the detail pane now
fully hides via display:none (CSS class 'detail-collapsed' on the
split-layout container) so the packets table expands to 100% width.
Clicking a packet row removes the class and restores the detail pane.
Previously the pane only cleared its content but kept its 420px width,
leaving a blank placeholder that wasted ~40% of screen space.
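The CSS side is one rule; a sketch assuming a `.packet-detail` class for the pane (`.split-layout` and `detail-collapsed` are from the commit, the pane's actual class name may differ):

```css
/* Hide the detail pane entirely when collapsed so the table takes full width. */
.split-layout.detail-collapsed .packet-detail {
  display: none;
}
```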
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Issues fixed:
- #127: Firefox copy URL - shared copyToClipboard() with execCommand fallback
- #125: Dismiss packet detail pane - close button with keyboard support
- #124: Customize window scrollbar - flex layout fix for overflow
- #122: Last Activity stale times - use last_heard || last_seen
Test improvements:
- E2E perf: replace 19 networkidle waits, cut navigations 14->7, remove 11 sleeps
- 8 new unit tests for copyToClipboard helper (47->55 in test-frontend-helpers)
- 1 new E2E test for packet pane dismiss
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

Keep the 📍 map link in the Location metadata row (goes to app map).
Remove the redundant 📍 Map pill in the hex breakdown (went to Google Maps).
One link, one style.
Was making N API calls per observer for ambiguous hops on every page load,
plus another per packet detail view. All hop resolution now uses the
client-side HopResolver which already handles ambiguous prefixes.
Eliminates the main perf regression.
- App Flags now shows human-readable type (Companion/Repeater/Room Server/Sensor)
instead of confusing individual flag names like 'chat, repeater'
- Boolean flags (location, name) shown separately after type: 'Room Server + location, name'
- Added Google Maps link on longitude row using existing detail-map-link style
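The App Flags mapping can be sketched as a small lookup; the bit values and type codes below are hypothetical placeholders — the real assignments live in the decoder:

```javascript
// Hypothetical flag layout: low nibble = node type, high bits = booleans.
const TYPE_NAMES = { 1: 'Companion', 2: 'Repeater', 3: 'Room Server', 4: 'Sensor' };

function describeAppFlags(flags) {
  const type = TYPE_NAMES[flags & 0x0f] || 'Unknown';
  const extras = [];
  if (flags & 0x10) extras.push('location'); // hypothetical bit
  if (flags & 0x20) extras.push('name');     // hypothetical bit
  return extras.length ? `${type} + ${extras.join(', ')}` : type;
}
```

With both boolean bits set on a type-3 node this yields the commit's example string, 'Room Server + location, name'.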
- Move --status-green/yellow/red from home.css to style.css :root (light+dark)
- Replace hardcoded status colors in style.css (.tl-snr, .health-dot, .byop-err,
.badge-hash-*, .fav-star.on, .spark-fill) with CSS variable references
- Replace hardcoded colors in live.css (VCR mode, stat pills, fdc-link, playhead)
- Replace --primary/--bg-secondary/--text-primary/--text-secondary dead vars with
canonical --accent/--input-bg/--text/--text-muted in style.css, map.js, live.js,
traces.js, packets.js
- Fix nodes.js legend colors to use ROLE_COLORS globals instead of hardcoded hex
- Replace hardcoded hex in home.js (SNR), perf.js (indicators), map.js (accuracy
circles) with CSS variable references via getComputedStyle or var()
- Add --detail-bg to customizer (THEME_CSS_MAP, DEFAULTS, ADVANCED_KEYS, labels)
- Move font/mono out of ADVANCED_KEYS into separate Fonts section in customizer
- Remove debug console.log lines from customize.js
- Bump cache busters in index.html
theme-changed now dispatches a theme-refresh event instead of a
full navigate(). Map re-renders markers, packets re-renders
table rows. No teardown/rebuild, no flash.
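The decoupling can be sketched with the platform event API; a bare EventTarget stands in for `window` here, and the handler bodies are illustrative:

```javascript
// Listeners repaint in place instead of the old full navigate() teardown.
const bus = new EventTarget();
bus.addEventListener('theme-refresh', () => {
  // map page: re-render markers with the new palette
});
bus.addEventListener('theme-refresh', () => {
  // packets page: re-render visible table rows
});
// The customizer fires this instead of navigating:
bus.dispatchEvent(new Event('theme-refresh'));
```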
Per-observer resolution in the WS handler made the render path async,
which broke the debounce callback (unhandled promise + race conditions).
Live packets now render immediately with global cache. Per-observer
resolution happens on initial load and packet detail only.
New packets arriving via WebSocket were only getting global
resolution. Now ambiguous hops in WS batches also get per-observer
server-side resolution before rendering.
Ambiguous hops in the list now get resolved per-observer via
batch server API calls. Cache uses observer-scoped keys
(hop:observerId) so the same 1-byte prefix shows different
names depending on which observer saw the packet.
Flow: global resolve first (fast, covers unambiguous hops),
then batch per-observer resolve for ambiguous ones only.
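The observer-scoped keys can be sketched as follows (a Map is assumed for the cache, and the node names are illustrative):

```javascript
// The same 1-byte prefix resolves to different names per observer,
// so cache keys are hop:observerId rather than the bare prefix.
const hopCache = new Map();
const hopKey = (prefix, observerId) => `${prefix}:${observerId}`;

hopCache.set(hopKey('a1', 'obs-north'), 'Repeater North');
hopCache.set(hopKey('a1', 'obs-south'), 'Repeater South');
```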
When packet doesn't have lat/lon directly (channel messages, DMs),
look up sender node from DB by pubkey or name. Use that GPS as
the origin anchor for hop disambiguation. We've seen ADVERTs from
these senders — use that known location.
Client-side HopResolver wasn't properly disambiguating despite
correct data. Switched detail view to use the server API directly:
/api/resolve-hops?hops=...&observer=...&originLat=...&originLon=...
Server-side resolution is battle-tested and handles regional
filtering + GPS-anchored disambiguation correctly.
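Building that query string can be sketched with URLSearchParams; parameter names are from the commit, the helper name and values are illustrative:

```javascript
// Assembles /api/resolve-hops?hops=...&observer=...&originLat=...&originLon=...
function resolveHopsUrl(hops, observerId, originLat, originLon) {
  const qs = new URLSearchParams({
    hops: hops.join(','),
    observer: observerId,
    originLat: String(originLat),
    originLon: String(originLon),
  });
  return `/api/resolve-hops?${qs}`;
}
```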
List view resolves hops without anchor (no per-packet context).
Detail view now always re-resolves with the packet's actual GPS
coordinates + observer, overwriting stale cache entries.
Removed debug logging.
ADVERT packets have GPS coordinates — use them as the forward
pass anchor so the first hop resolves to the nearest candidate
to the sender, not random pick order.
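The anchored pick can be sketched as a nearest-candidate scan. Plain equirectangular distance is enough for ranking here; the real resolver may weigh candidates differently:

```javascript
// Returns the candidate geographically closest to the anchor (sender GPS),
// instead of whichever candidate happened to be seen first.
function nearestCandidate(anchor, candidates) {
  let best = null, bestD = Infinity;
  for (const c of candidates) {
    const dLat = c.lat - anchor.lat;
    const dLon = (c.lon - anchor.lon) * Math.cos(anchor.lat * Math.PI / 180);
    const d = dLat * dLat + dLon * dLon;  // squared distance is fine for ranking
    if (d < bestD) { bestD = d; best = c; }
  }
  return best;
}
```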
The general hop cache was populated without observer context,
so all conflicts showed filterMethod=none. Now renderDetail()
re-resolves hops with pkt.observer_id, getting proper regional
filtering with distances and conflict flags.