- New route #/packet/123 shows full packet detail on its own page (router sketched below)
- Back link to packets list
- Copy Link button now generates #/packet/ID URLs
- Reuses existing renderDetail() for consistent display
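A minimal sketch of the hash routing, reusing the existing renderDetail(); everything else here (element ids, helper names) is illustrative:

```js
// Client-side hash router (sketch). renderDetail() is the existing detail
// renderer; showPacketsList() stands in for the current list view.
function route() {
  const m = (location.hash || '#/').match(/^#\/packet\/(\d+)$/);
  if (m) showPacketPage(Number(m[1]));
  else showPacketsList();
}

async function showPacketPage(id) {
  const pkt = await fetch(`/api/packets/${id}`).then(r => r.json());
  document.getElementById('app').innerHTML =
    '<a href="#/">&larr; Back to packets</a>' + renderDetail(pkt);
}

function showPacketsList() { /* existing list rendering */ }

// Copy Link: build a shareable #/packet/ID URL for the current packet.
function copyLink(id) {
  navigator.clipboard.writeText(`${location.origin}${location.pathname}#/packet/${id}`);
}

window.addEventListener('hashchange', route);
route(); // handles deep links like #/packet/123 on initial load
```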
Cache entries now stay servable as stale for up to 2× their TTL. The first request after the TTL expires is served the stale data while a recompute runs in the background (guarded: only one recompute at a time). No more cache stampedes.
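A sketch of the stale-while-revalidate pattern described above, assuming a key-based cache helper (all names illustrative):

```js
// Stale-while-revalidate cache (sketch; names illustrative).
const entries = new Map();  // key -> { value, expires }
const inFlight = new Map(); // key -> Promise: at most one recompute per key

async function cached(key, ttl, compute) {
  const now = Date.now();
  const hit = entries.get(key);
  if (hit && now < hit.expires) return hit.value;        // fresh
  if (hit && now < hit.expires + ttl) {                  // stale, still within 2x TTL
    refresh(key, ttl, compute);                          // recompute in background
    return hit.value;                                    // serve stale immediately
  }
  return refresh(key, ttl, compute);                     // cold or too old: caller waits
}

function refresh(key, ttl, compute) {
  const running = inFlight.get(key);
  if (running) return running;                           // stampede guard
  const p = Promise.resolve()
    .then(compute)
    .then((value) => {
      entries.set(key, { value, expires: Date.now() + ttl });
      return value;
    })
    .finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```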
/api/health returns (sample shape below):
- Process memory (RSS, heap)
- Event loop lag (p50/p95/p99/max, sampled every 1s)
- Cache stats (hit rate, stale hits, recomputes)
- WebSocket client count
- Packet store size
- Recent slow queries
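Roughly this response shape; field names and numbers here are illustrative, not captured output:

```js
// GET /api/health: illustrative shape, invented numbers.
({
  memory:    { rss: 98_566_144, heapUsed: 41_234_567, heapTotal: 67_108_864 }, // bytes
  eventLoop: { p50: 0.8, p95: 2.1, p99: 5.4, max: 12.3 },  // ms lag, sampled every 1s
  cache:     { hitRate: 0.97, staleHits: 12, recomputes: 5 },
  wsClients: 3,
  packetStoreSize: 27_000,                                  // packets in RAM
  slowQueries: [{ sql: 'SELECT ...', ms: 552 }],
});
```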
The node analytics endpoint was doing 7 separate LIKE scans on SQLite (~552ms). It now reads from the pktStore.byNode index and computes all aggregations in JS, with results cached under TTL.nodeAnalytics.
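A sketch of the in-memory aggregation, using the cached() helper sketched earlier; the packet field names (type, observer, timestamp) are illustrative:

```js
// One in-memory pass replaces the 7 LIKE scans.
function nodeAnalytics(pubkey) {
  return cached(`nodeAnalytics:${pubkey}`, TTL.nodeAnalytics * 1000, () => {
    const pkts = pktStore.byNode.get(pubkey) || [];
    const byType = {};
    const observers = new Set();
    let lastSeen = 0;
    for (const p of pkts) {
      byType[p.type] = (byType[p.type] || 0) + 1;
      observers.add(p.observer);
      if (p.timestamp > lastSeen) lastSeen = p.timestamp;
    }
    return { total: pkts.length, byType, observerCount: observers.size, lastSeen };
  });
}
```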
Node.js is single-threaded: a 5s subpath computation in a background timer blocks ALL concurrent requests. The stats endpoint went from 3ms to 1.2s because it was waiting for a background refresh to finish. Pre-warming on startup plus long TTLs (30min-1hr) is sufficient; at most one user per hour eats a cold compute cost.
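A standalone demonstration of the failure mode (not project code): synchronous work on a timer stalls every in-flight request on the same thread.

```js
const http = require('http');

function heavyCompute() {            // stands in for the 5s subpath build
  const end = Date.now() + 5000;
  while (Date.now() < end) {}        // busy-wait: the event loop is frozen
}

setInterval(heavyCompute, 30_000);   // "background" timer, still the same thread

http.createServer((req, res) => {
  res.end('ok');                     // ~instant normally; up to 5s during heavyCompute
}).listen(3000);
```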
No user ever hits a cold compute path. Background timers fire at 80%
of each TTL, hitting endpoints with ?nocache=1 to force recomputation
and re-cache the result.
Jobs: RF (24min), topology (24min), channels (24min), hash-sizes (48min),
subpaths ×4 (48min), bulk-health (8min).
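A sketch of how such a scheduler could look. The endpoint paths are assumptions, and the TTLs are back-derived from the intervals above (interval = 80% of TTL):

```js
// Refresh scheduler (sketch). TTLs in seconds.
const jobs = [
  { path: '/api/rf',          ttlSec: 1800 }, // fires every 24min
  { path: '/api/topology',    ttlSec: 1800 }, // 24min
  { path: '/api/channels',    ttlSec: 1800 }, // 24min
  { path: '/api/hash-sizes',  ttlSec: 3600 }, // 48min
  { path: '/api/bulk-health', ttlSec: 600  }, // 8min
  // ...plus the 4 subpath jobs (48min)
];

for (const { path, ttlSec } of jobs) {
  setInterval(() => {
    // ?nocache=1 forces a recompute; the handler re-caches the fresh result.
    fetch(`http://localhost:3000${path}?nocache=1`).catch(() => {});
  }, ttlSec * 0.8 * 1000);
}
```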
Perf page now accessible from main nav (⚡ Perf).
Shows in-memory packet store metrics: packets in RAM, memory used/limit,
queries served, live inserts, evictions, index sizes.
Usage: `node benchmark.js [--runs N] [--json]`
Adds ?nocache=1 query param to bypass server cache for benchmarking.
Tests all 21 endpoints cached vs cold, shows speedup comparison.
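A sketch of the measurement loop (Node 18+ for global fetch; endpoint list abbreviated, flag parsing simplified):

```js
// node benchmark.js [--runs N] [--json]
const argv = process.argv.slice(2);
const runs = (argv.includes('--runs') && Number(argv[argv.indexOf('--runs') + 1])) || 5;
const asJson = argv.includes('--json');
const base = 'http://localhost:3000'; // assumed server address
const endpoints = ['/api/packets', '/api/topology', '/api/rf' /* ...all 21 */];

async function time(url) {
  const t0 = performance.now();
  await fetch(url).then(r => r.arrayBuffer()); // drain the body
  return performance.now() - t0;
}

(async () => {
  const rows = [];
  for (const ep of endpoints) {
    let cold = 0, warm = 0;
    for (let i = 0; i < runs; i++) {
      cold += await time(`${base}${ep}?nocache=1`); // bypass server cache
      warm += await time(`${base}${ep}`);           // normal cached path
    }
    rows.push({ endpoint: ep,
                coldMs: +(cold / runs).toFixed(1),
                cachedMs: +(warm / runs).toFixed(1),
                speedup: +(cold / warm).toFixed(1) });
  }
  asJson ? console.log(JSON.stringify(rows, null, 2)) : console.table(rows);
})();
```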
Existing groups get count/observer_count incremented, latest timestamp
updated, longest path kept. New hashes get a new group prepended.
Expanded children updated inline. No more /api/packets re-fetch on
any incoming packet in either mode.
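A sketch of the in-place group update; the group shape and helper names are illustrative:

```js
// Merge an incoming packet into the grouped table without re-fetching.
function applyPacketToGroups(pkt, groups) {
  const g = groups.find(x => x.hash === pkt.hash);
  if (!g) {
    groups.unshift(newGroup(pkt));                          // unseen hash: prepend
    return;
  }
  g.count += 1;
  g.observers.add(pkt.observer);
  g.observer_count = g.observers.size;
  g.latest = Math.max(g.latest, pkt.timestamp);             // latest timestamp wins
  if (pkt.path.length > g.path.length) g.path = pkt.path;   // keep the longest path
  if (g.expanded) appendChildRow(g, pkt);                   // expanded children inline
}

function newGroup(pkt) {
  return { hash: pkt.hash, count: 1, observers: new Set([pkt.observer]),
           observer_count: 1, latest: pkt.timestamp, path: pkt.path, expanded: false };
}

function appendChildRow(group, pkt) { /* existing child-row rendering */ }
```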
Non-grouped mode: new packets from WebSocket are filtered client-side
and prepended to the table, no API call. Grouped mode still re-fetches
(group counts change). Server broadcast now includes full packet row.
Eliminates repeated /api/packets fetches on every incoming packet.
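A sketch of the client-side handler at this stage (grouped mode still re-fetching); the helper names are hypothetical:

```js
const ws = new WebSocket(`ws://${location.host}`);

ws.onmessage = (ev) => {
  const msg = JSON.parse(ev.data);
  if (msg.type !== 'packet') return;   // broadcast now carries the full packet row
  if (groupedMode) {
    refetchGroups();                   // group counts change server-side
  } else if (matchesFilters(msg.packet)) {
    prependRow(msg.packet);            // client-side filter + prepend, no API call
  }
};
```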
Four parallel subpath queries were each scanning 27K packets independently (937ms + 1.99s + 3.09s + 6.19s). Now one shared computation builds all subpath data, cached for 1hr; subsequent queries just slice the result. Pre-warmed on startup so the first user never sees a cold call.
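A sketch of the shared computation with the 1hr cache and the startup pre-warm; buildAllSubpaths() internals and the slice names are illustrative:

```js
let subpathPromise = null;

function getSubpathData() {
  if (!subpathPromise) {
    subpathPromise = buildAllSubpaths();                      // single scan
    setTimeout(() => { subpathPromise = null; }, 3_600_000);  // 1hr cache
  }
  return subpathPromise;
}

async function buildAllSubpaths() {
  const slices = { direct: [], oneHop: [], twoHop: [], longer: [] }; // illustrative buckets
  for (const pkt of pktStore.packets) {
    // ...classify pkt into the right bucket(s) in this one pass
  }
  return slices;
}

// Each of the 4 subpath endpoints just slices the shared result.
async function subpathEndpoint(kind) {
  return (await getSubpathData())[kind];
}

getSubpathData(); // pre-warm on startup: the first user never waits
```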
Zero SQLite reads from packets table. Every endpoint that previously
scanned packets now reads from the in-memory PacketStore.
Expected: subpaths from 1.6s to <100ms, topology from 700ms to <50ms,
RF from 270ms to <30ms on cold calls.
- PacketStore loads all packets into memory on startup (~11MB for 27K packets); see the sketch after this list
- Indexed by id, hash, observer, and node pubkey for fast lookups
- /api/packets, /api/packets/timestamps, /api/packets/:id all served from RAM
- MQTT ingest writes to both RAM + SQLite
- Configurable maxMemoryMB (default 1024MB) in config.json packetStore section
- groupByHash queries computed in-memory
- Packet store stats exposed in /api/perf
- Expected: /api/packets goes from 77ms to <1ms
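A sketch of the store's shape under the assumptions above; method names and the eviction bookkeeping beyond what the list states are illustrative:

```js
class PacketStore {
  constructor({ maxMemoryMB = 1024 } = {}) { // config.json packetStore.maxMemoryMB
    this.maxBytes = maxMemoryMB * 1024 * 1024;
    this.bytes = 0;
    this.packets = [];           // insertion order, oldest first
    this.byId = new Map();       // id -> packet
    this.byHash = new Map();     // hash -> [packets]
    this.byObserver = new Map(); // observer -> [packets]
    this.byNode = new Map();     // node pubkey -> [packets]
  }

  insert(pkt) {                  // MQTT ingest calls this, then writes SQLite
    this.packets.push(pkt);
    this.byId.set(pkt.id, pkt);
    addTo(this.byHash, pkt.hash, pkt);
    addTo(this.byObserver, pkt.observer, pkt);
    addTo(this.byNode, pkt.node, pkt);
    this.bytes += roughSize(pkt);
    while (this.bytes > this.maxBytes && this.packets.length) this.evictOldest();
  }

  evictOldest() {
    const old = this.packets.shift();
    this.byId.delete(old.id);
    // (removal from the three array indexes elided in this sketch)
    this.bytes -= roughSize(old);
  }
}

function addTo(map, key, val) {
  const arr = map.get(key);
  arr ? arr.push(val) : map.set(key, [val]);
}

function roughSize(pkt) {
  return JSON.stringify(pkt).length * 2; // crude per-packet byte estimate
}
```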
All cache TTLs are now read from the cacheTTL section of config.json (values in seconds).
Client fetches config on load via GET /api/config/cache.
config.example.json updated with defaults.
Edit config.json and restart the server; no code changes needed to tweak TTLs.
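A sketch of the wiring; the exact cacheTTL key names and defaults here are illustrative:

```js
// Server startup (sketch): merge config.json cacheTTL over defaults.
const fs = require('fs');

const config = JSON.parse(fs.readFileSync('config.json', 'utf8'));
const TTL = {
  nodeAnalytics: 1800, // defaults, in seconds
  subpaths: 3600,
  ...(config.cacheTTL || {}), // config.json overrides
};

// Served to the client on load, e.g. with Express:
// app.get('/api/config/cache', (req, res) => res.json(TTL));
```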