Commit Graph

236 Commits

Author SHA1 Message Date
you fceff15e2f feat: update URL bar when selecting a packet for easy sharing 2026-03-20 06:49:20 +00:00
you 9c87f0040e docs: update README — fix duplicate heading, add Docker/perf files to project structure 2026-03-20 06:47:07 +00:00
you 395abc2585 feat: standalone packet detail page at #/packet/ID
- New route #/packet/123 shows full packet detail on its own page
- Back link to packets list
- Copy Link button now generates #/packet/ID URLs
- Reuses existing renderDetail() for consistent display
2026-03-20 06:44:18 +00:00
you e82e4fe05f fix: copy link URL format — use #/packets/id/N not query param 2026-03-20 06:42:12 +00:00
you 6cf9793706 feat: copy link button in packet detail pane 2026-03-20 06:41:36 +00:00
you 1772b34e8f fix: copy all JS files in Dockerfile — was missing decoder.js 2026-03-20 06:18:04 +00:00
you fea8a7e0b5 feat: add Caddy to Docker container — automatic HTTPS
- Caddy reverse proxies :80/:443 → Node :3000
- Mount custom Caddyfile for your domain → auto Let's Encrypt TLS
- Caddy certs persisted in /data/caddy volume
- Ports: 80 (HTTP), 443 (HTTPS), 1883 (MQTT)
2026-03-20 06:07:38 +00:00
you 2e486e2a66 feat: Docker packaging — single container with Mosquitto + Node
- Dockerfile: Alpine + Node 22 + Mosquitto + supervisord
- Auto-copies config.example.json if no config.json mounted
- Named volume for data persistence (SQLite + Mosquitto)
- Ports: 3000 (web), 1883 (MQTT)
- .dockerignore excludes data, config, git, benchmarks
- README updated with Docker quickstart
2026-03-20 06:06:15 +00:00
you f0c29b38f1 chore: bump perf.js cache buster 2026-03-20 05:47:29 +00:00
you 46d9b690ee fix: close if(health) block in perf dashboard — was swallowing all content 2026-03-20 05:47:11 +00:00
you 2e51e5f743 feat: system health + SWR stats in perf dashboard
Perf page now shows: heap usage, RSS, event loop p95/max/current,
WS client count, stale-while-revalidate hits, recompute count.
Color-coded: green/yellow/red based on thresholds.
2026-03-20 05:45:51 +00:00
you 11b398cfe1 feat: stale-while-revalidate cache + /api/health telemetry
Cache: entries remain servable as stale for up to 2× TTL. The first request
after TTL serves the stale data while a recompute runs (guarded: one at a
time). No more cache stampedes.

/api/health returns:
- Process memory (RSS, heap)
- Event loop lag (p50/p95/p99/max, sampled every 1s)
- Cache stats (hit rate, stale hits, recomputes)
- WebSocket client count
- Packet store size
- Recent slow queries
2026-03-20 05:43:32 +00:00
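The stale-while-revalidate flow in 11b398cfe1 can be sketched as follows. This is an illustrative sketch, not the project's actual class; the name `SWRCache` and its fields are assumptions.

```javascript
// Minimal stale-while-revalidate cache sketch (illustrative).
// Entries are fresh for `ttlMs`, then servable as stale for another
// `ttlMs` while at most one guarded recompute runs in the background.
class SWRCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map();      // key -> { value, storedAt }
    this.recomputing = new Set();  // keys with an in-flight recompute
    this.stats = { hits: 0, staleHits: 0, recomputes: 0 };
  }

  async get(key, compute) {
    const e = this.entries.get(key);
    const age = e ? Date.now() - e.storedAt : Infinity;

    if (e && age < this.ttlMs) {            // fresh hit
      this.stats.hits++;
      return e.value;
    }
    if (e && age < 2 * this.ttlMs) {        // stale hit: serve old value,
      this.stats.staleHits++;               // recompute at most once
      if (!this.recomputing.has(key)) {
        this.recomputing.add(key);
        this.stats.recomputes++;
        Promise.resolve(compute())
          .then(v => this.entries.set(key, { value: v, storedAt: Date.now() }))
          .finally(() => this.recomputing.delete(key));
      }
      return e.value;
    }
    // Miss or past the stale window: compute inline (cold path).
    const value = await compute();
    this.entries.set(key, { value, storedAt: Date.now() });
    return value;
  }
}
```

The recompute guard (`recomputing` set) is what prevents stampedes: concurrent requests during the stale window all get the old value while a single recompute runs.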
you f4ac789ee9 release: v2.1.0 — Performance
Two-layer caching: in-memory packet store + TTL response cache.
All packet reads from RAM, SQLite write-only.

Highlights:
- Bulk Health: 7,059ms → 1ms (7,059×)
- Node Analytics: 381ms → 1ms (381×)
- Topology: 685ms → 2ms (342×)
- RF Analytics: 253ms → 1ms (253×)
- Channels: 206ms → 1ms (206×)
- Node Health/Detail: 133-195ms → 1ms

Architecture:
- In-memory packet store with Map indexes (byNode, byHash, byObserver)
- Ring buffer with configurable max (1GB default, ~2.3M packets)
- Smart cache invalidation (packet bursts don't nuke analytics)
- Pre-warm all heavy endpoints on startup
- Eliminated every LIKE '%pubkey%' full-table scan
- All TTLs configurable via config.json
- A/B benchmark script included
- Favicon added
2026-03-20 05:38:23 +00:00
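The "ring buffer with configurable max" from the v2.1.0 notes might look roughly like this. A hedged sketch only: the class name, the size estimator, and the use of `Array.shift()` (a real ring buffer would use a circular index) are all assumptions, not the project's implementation.

```javascript
// Illustrative ring-buffer-style store: evict oldest packets once an
// estimated memory budget is exceeded.
class RingStore {
  constructor(maxBytes) {
    this.maxBytes = maxBytes;
    this.bytes = 0;
    this.packets = [];
    this.evictions = 0;
  }

  // Crude per-packet size estimate (2 bytes per JSON character).
  estimate(pkt) { return JSON.stringify(pkt).length * 2; }

  insert(pkt) {
    this.packets.push(pkt);
    this.bytes += this.estimate(pkt);
    while (this.bytes > this.maxBytes && this.packets.length > 1) {
      const old = this.packets.shift();   // drop oldest first
      this.bytes -= this.estimate(old);
      this.evictions++;
    }
  }
}
```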
you 2a2a80b4ea chore: add A/B benchmark script, remove worker thread experiments 2026-03-20 05:37:08 +00:00
you 6dd077be13 feat: add favicon — mesh network icon (SVG + ICO) 2026-03-20 05:36:32 +00:00
you 2b3597dff1 fix: null guard getElementById in animatePacket
Elements don't exist yet when replayRecent fires during init.
2026-03-20 05:34:04 +00:00
you 77b7b218b1 perf: channels endpoint — single pass, no sort, no double filter
Was doing two pktStore.filter() passes plus a sort on every request. Now a
single loop over all packets with an inline type check.
2026-03-20 05:31:03 +00:00
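The single-pass rewrite in 77b7b218b1 amounts to something like the following. The function name, the `type` field, and the `'channel'` tag are illustrative assumptions.

```javascript
// One loop with an inline type check instead of two filter() passes
// plus a sort (sketch, not the project's code).
function channelCounts(packets) {
  const counts = new Map();              // channel hash -> packet count
  for (const p of packets) {
    if (p.type !== 'channel') continue;  // inline type check
    counts.set(p.channel, (counts.get(p.channel) || 0) + 1);
  }
  return counts;
}
```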
you 0a499745ec perf: pre-warm all heavy analytics endpoints on startup
Sequential self-requests after subpath pre-warm completes.
RF, topology, channels, hash-sizes, bulk-health all cached
before any user hits the page.
2026-03-20 05:29:10 +00:00
you c83eb099c9 perf: stop calling db.getNode() in health/analytics endpoints
db.getNode() does a 4-way LIKE scan for recent packets we don't even
use. Direct SELECT on primary key instead. Saves ~110ms per call.
2026-03-20 05:17:06 +00:00
you cd01da5a64 perf: hash-sizes analytics reads from memory store
Last remaining full-table scan on packets from SQLite.
All packet reads now go through pktStore (in-memory).
2026-03-20 05:13:13 +00:00
you 0b4590e48d perf: node detail uses in-memory packet index
Was doing 4-way LIKE scan for recent packets (~130ms).
Now reads from pktStore.byNode, slices last 20.
2026-03-20 05:12:00 +00:00
you f5d377e396 perf: node health uses in-memory packet store
Was doing 6 LIKE scans on SQLite (~169ms). Now reads from
pktStore.byNode index, single pass over packets.
2026-03-20 05:11:00 +00:00
you dc703ebf28 perf: node analytics uses in-memory packet store
Was doing 7 separate LIKE scans on SQLite (~552ms). Now reads from
pktStore.byNode index and computes all aggregations in JS.
Also added cache with TTL.nodeAnalytics.
2026-03-20 05:09:49 +00:00
you 89c1e84924 perf: bulk-health uses in-memory packet store index
Was doing 50-pattern LIKE OR scan on all packets in SQLite (~2s).
Now reads from pktStore.byNode index — O(1) lookup per node.
2026-03-20 05:08:23 +00:00
you 50b6124325 revert: remove background refresh jobs — blocks event loop
Node.js is single-threaded. A 5s subpath computation in a background
timer blocks ALL concurrent requests. Stats endpoint went from 3ms
to 1.2s because it was waiting for a background refresh to finish.

Pre-warm on startup + long TTLs (30min-1hr) is sufficient. At most
one user per hour eats a cold compute cost.
2026-03-20 04:56:57 +00:00
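The blocking behavior that motivated this revert is easy to reproduce. A small demo, not project code: a synchronous "background" computation on Node's single event loop delays every other callback, including request handlers.

```javascript
// Stands in for a heavy synchronous cache recompute.
function busyWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {}           // synchronous: nothing else can run
}

// Schedule a blocking "background refresh" at 0ms and a "request
// handler" at 10ms; resolve with how late the handler actually ran.
function measureDelay(blockMs) {
  return new Promise(resolve => {
    const t0 = Date.now();
    setTimeout(() => busyWork(blockMs), 0);
    setTimeout(() => resolve(Date.now() - t0 - 10), 10);
  });
}
```

With a 200ms block, the 10ms timer typically fires roughly 200ms late, which is exactly the 3ms-to-1.2s stats regression described above, scaled down.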
you f08756a6ac perf: background refresh jobs — recompute expensive caches before TTL expires
No user ever hits a cold compute path. Background timers fire at 80%
of each TTL, hitting endpoints with ?nocache=1 to force recomputation
and re-cache the result.

Jobs: RF (24min), topology (24min), channels (24min), hash-sizes (48min),
subpaths ×4 (48min), bulk-health (8min).
2026-03-20 04:52:06 +00:00
you c2bc07bb4a feat: live A/B benchmark — launches SQLite-only vs in-memory servers
NO_MEMORY_STORE=1 env var makes packet-store fall through to SQLite
for all reads. Benchmark spins up both servers on temp ports and
compares: SQLite cold, Memory cold, Memory cached.

Results on 27K packets (ARM64):
  Subpaths 5-8: SQLite 4.7s → cached 1.1ms (4,273×)
  Bulk health:  SQLite 1.8s → cached 1.7ms (1,059×)
  Topology:     SQLite 1.1s → cached 3.0ms (367×)
  Channels:     SQLite 617ms → cached 1.9ms (325×)
  RF Analytics: SQLite 448ms → cached 1.6ms (280×)
2026-03-20 04:47:31 +00:00
you e589fd959a feat: benchmark compares against pre-optimization baseline
Stores pre-optimization /api/perf measurements (pure SQLite, 27K packets)
in benchmark-baseline.json. Benchmark suite auto-loads and shows side-by-side:

Highlights:
  Subpaths 5-8 hop: 6,190ms → 1.1ms (5,627× faster)
  Hash sizes:          430ms → 1.3ms (331× faster)
  Topology:            697ms → 2.8ms (249× faster)
  RF analytics:        272ms → 1.6ms (170× faster), 1MB → 22KB
  Packets:              78ms → 3ms (26× faster)
  Channels:             60ms → 1.5ms (40× faster)
  Bulk health:       1,610ms → 67ms (24× faster)
2026-03-20 04:30:18 +00:00
you 706227b106 feat: add Perf dashboard to nav bar, show packet store stats
Perf page now accessible from the main nav (Perf).

Shows in-memory packet store metrics: packets in RAM, memory used/limit,
queries served, live inserts, evictions, index sizes.
2026-03-20 04:25:05 +00:00
you 44f9a95ec5 feat: benchmark suite + nocache bypass for cold compute testing
node benchmark.js [--runs N] [--json]
Adds ?nocache=1 query param to bypass server cache for benchmarking.
Tests all 21 endpoints cached vs cold, shows speedup comparison.
2026-03-20 04:23:34 +00:00
you b481df424f docs: add PERFORMANCE.md with before/after benchmarks 2026-03-20 04:21:53 +00:00
you 2edcca77f1 perf: RF endpoint from 1MB to ~15KB — server-side histograms, scatter downsampled to 500pts
Was sending 27K raw SNR/RSSI/size values (420KB) + 27K scatter points (763KB).
Now: histograms computed server-side (20-25 bins), scatter downsampled
to max 500 evenly-spaced points. Client histogram() accepts both formats.
2026-03-20 04:17:25 +00:00
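The server-side reduction in 2edcca77f1 could be sketched like this. Function names, bin count, and field shapes are assumptions for illustration.

```javascript
// Compute histogram bin counts server-side instead of shipping raw values.
function histogram(values, bins = 20) {
  const min = Math.min(...values), max = Math.max(...values);
  const width = (max - min) / bins || 1;   // avoid divide-by-zero
  const counts = new Array(bins).fill(0);
  for (const v of values) {
    const i = Math.min(bins - 1, Math.floor((v - min) / width));
    counts[i]++;
  }
  return { min, max, width, counts };
}

// Keep at most `max` evenly-spaced points instead of the full scatter.
function downsample(points, max = 500) {
  if (points.length <= max) return points;
  const step = points.length / max;
  const out = [];
  for (let i = 0; i < max; i++) out.push(points[Math.floor(i * step)]);
  return out;
}
```

Shipping 20-25 bin counts and ≤500 scatter points instead of 27K raw values is where the 1MB → ~15KB reduction comes from.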
you cd678d492d perf: grouped mode also updates client-side from WS — zero API fetches
Existing groups get count/observer_count incremented, latest timestamp
updated, longest path kept. New hashes get a new group prepended.
Expanded children updated inline. No more /api/packets re-fetch on
any incoming packet in either mode.
2026-03-20 04:15:04 +00:00
you 4c6172bc6e perf: WS packets prepend client-side instead of re-fetching entire list
Non-grouped mode: new packets from WebSocket are filtered client-side
and prepended to the table, no API call. Grouped mode still re-fetches
(group counts change). Server broadcast now includes full packet row.
Eliminates repeated /api/packets fetches on every incoming packet.
2026-03-20 04:12:07 +00:00
you d01fa7e17f perf: pre-warm all 4 subpath query variants on startup + dedup concurrent computation 2026-03-20 03:53:10 +00:00
you 35e86c34e0 perf: single-pass subpath computation + startup pre-warm
4 parallel subpath queries were each scanning 27K packets independently
(937ms + 1.99s + 3.09s + 6.19s). Now one shared computation builds all
subpath data, cached for 1hr. Subsequent queries just slice the result.
Pre-warmed on startup so first user never sees a cold call.
2026-03-20 03:51:58 +00:00
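The shared-computation-plus-dedup pattern from d01fa7e17f and 35e86c34e0 can be sketched as below. The function name, module-level state, and TTL plumbing are illustrative assumptions.

```javascript
// One shared computation serves all subpath variants; concurrent
// callers await the same in-flight promise (dedup), and later calls
// hit the cached result until the TTL expires.
let subpathPromise = null;
let subpathCache = null;          // { data, expires }

async function getSubpaths(computeAll, ttlMs = 3600_000) {
  if (subpathCache && Date.now() < subpathCache.expires) {
    return subpathCache.data;     // cached: per-variant queries slice this
  }
  if (!subpathPromise) {          // first caller starts the computation...
    subpathPromise = Promise.resolve(computeAll()).then(data => {
      subpathCache = { data, expires: Date.now() + ttlMs };
      subpathPromise = null;
      return data;
    });
  }
  return subpathPromise;          // ...concurrent callers share it
}
```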
you f8638974c7 perf: smart cache invalidation — only channels/observers on packet burst, node/health/analytics expire by TTL, node invalidated on ADVERT only 2026-03-20 03:48:55 +00:00
you 1be6b4f4ad perf: ALL packet reads from RAM — analytics, channels, topology, subpaths, RF, observers
Zero SQLite reads from packets table. Every endpoint that previously
scanned packets now reads from the in-memory PacketStore.
Expected: subpaths from 1.6s to <100ms, topology from 700ms to <50ms,
RF from 270ms to <30ms on cold calls.
2026-03-20 03:43:23 +00:00
you d8d0572abb perf: in-memory packet store — all reads from RAM, SQLite write-only
- PacketStore loads all packets into memory on startup (~11MB for 27K packets)
- Indexed by id, hash, observer, and node pubkey for fast lookups
- /api/packets, /api/packets/timestamps, /api/packets/:id all served from RAM
- MQTT ingest writes to both RAM + SQLite
- Configurable maxMemoryMB (default 1024MB) in config.json packetStore section
- groupByHash queries computed in-memory
- Packet store stats exposed in /api/perf
- Expected: /api/packets goes from 77ms to <1ms
2026-03-20 03:38:37 +00:00
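The Map-indexed store described in d8d0572abb might look roughly like this. A minimal sketch: the class shape, field names (`hash`, `node`), and the elided ring-buffer eviction are all assumptions.

```javascript
// In-memory packet store with Map indexes by id, hash, and node pubkey.
class PacketStore {
  constructor() {
    this.packets = [];          // insertion order (eviction elided here)
    this.byId = new Map();
    this.byHash = new Map();    // hash -> [packets]
    this.byNode = new Map();    // node pubkey -> [packets]
  }

  insert(pkt) {
    this.packets.push(pkt);
    this.byId.set(pkt.id, pkt);
    for (const [index, key] of [[this.byHash, pkt.hash], [this.byNode, pkt.node]]) {
      if (!index.has(key)) index.set(key, []);
      index.get(key).push(pkt);
    }
  }

  // Replaces the 4-way LIKE scan: O(1) index lookup, then slice.
  recentForNode(pubkey, n = 20) {
    return (this.byNode.get(pubkey) || []).slice(-n);
  }
}
```

This is also the structure the later byNode commits (node health, node detail, bulk-health) lean on: each one swaps a SQLite LIKE scan for an index lookup.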
you de658bfb0d perf: configurable cache TTLs via config.json — server + client fetch from /api/config/cache
All cache TTLs now read from config.json cacheTTL section (seconds).
Client fetches config on load via GET /api/config/cache.
config.example.json updated with defaults.
Edit config.json, restart server — no code changes needed to tweak TTLs.
2026-03-20 03:23:58 +00:00
you 720d019a28 perf: align cache TTLs with real data rates — analytics 30min-1hr, nodes 5min, chat 10-15s, stats 10s, server debounce 30s 2026-03-20 03:20:33 +00:00
you ce030c91f7 perf: bump analytics cache to 5min, subpaths to 10min, cache subpath-detail 2026-03-20 02:24:46 +00:00
you 99ef07ca05 fix: debounce client cache invalidation (5s window) — same issue as server 2026-03-20 02:23:14 +00:00
you 141c28231e fix: debounce server cache invalidation (5s window), fix client cache stat reporting 2026-03-20 02:15:18 +00:00
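One plausible reading of the "debounce (5s window)" in 141c28231e is a leading-edge guard: during a packet burst, invalidation runs at most once per window. The sketch below assumes that reading; the real commit may instead defer invalidation to the end of the window.

```javascript
// Leading-edge guard: run `invalidate` at most once per `windowMs`.
function makeDebouncedInvalidate(invalidate, windowMs = 5000) {
  let last = 0;
  return function maybeInvalidate() {
    const now = Date.now();
    if (now - last < windowMs) return false;  // within window: skip
    last = now;
    invalidate();
    return true;
  };
}
```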
you 2b7ed064d1 perf page: show server + client cache stats 2026-03-20 02:10:27 +00:00
you 415440d36d merge: server + frontend perf optimizations 2026-03-20 02:07:54 +00:00
you 5832c73a0d perf: add TTL cache layer + rewrite bulk-health to single-query
- Add TTLCache class with hit/miss tracking
- Cache all expensive endpoints:
  - analytics/* endpoints: 60s TTL
  - channels: 30s TTL
  - channels/:hash/messages: 15s TTL
  - nodes/:pubkey: 30s TTL
  - nodes/:pubkey/health: 30s TTL
  - observers: 30s TTL
  - bulk-health: 60s TTL
- Invalidate all caches on new packet ingestion (POST + MQTT)
- Rewrite bulk-health from N×5 queries to 1 query + JS matching
- Add cache stats (size, hits, misses, hitRate) to /api/perf
2026-03-20 02:06:23 +00:00
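A TTL cache with hit/miss tracking in the spirit of 5832c73a0d could look like this. A sketch under assumptions: the method names, lazy expiry, and prefix-based invalidation are illustrative, not the project's TTLCache.

```javascript
// TTL cache with hit/miss stats and prefix invalidation.
class TTLCache {
  constructor() {
    this.store = new Map();            // key -> { value, expires }
    this.hits = 0;
    this.misses = 0;
  }

  set(key, value, ttlMs) {
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }

  get(key) {
    const e = this.store.get(key);
    if (!e || Date.now() >= e.expires) {
      if (e) this.store.delete(key);   // expired: evict lazily
      this.misses++;
      return undefined;
    }
    this.hits++;
    return e.value;
  }

  invalidate(prefix = '') {            // e.g. on new packet ingestion
    for (const key of [...this.store.keys()]) {
      if (key.startsWith(prefix)) this.store.delete(key);
    }
  }

  stats() {
    const total = this.hits + this.misses;
    return { size: this.store.size, hits: this.hits, misses: this.misses,
             hitRate: total ? this.hits / total : 0 };
  }
}
```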
you e98e04553a feat: add frontend API response caching with TTL, in-flight dedup, and WebSocket invalidation
- Replace api() with caching version supporting TTL and request deduplication
- Add appropriate TTLs to all api() call sites across all frontend JS files:
  - /stats: 5s TTL (was called 962 times in 3 min)
  - /nodes/:pubkey: 15s, /health: 30s, /observers: 30s
  - /channels: 15s, messages: 10s
  - /analytics/*: 60s, /bulk-health: 60s, /network-status: 60s
  - /nodes?*: 10s
- Skip caching for real-time endpoints (/packets, /resolve-hops, /perf)
- Invalidate /stats, /nodes, /channels caches on WebSocket messages
- Deduplicate in-flight requests (same path returns same promise)
- Add cache hit rate to window.apiPerf() console debugging
- Update all cache busters in index.html
2026-03-20 02:03:25 +00:00
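The caching `api()` wrapper from e98e04553a combines two ideas: a per-path TTL cache and in-flight request deduplication. The sketch below makes the fetcher injectable for illustration; the real client presumably wraps `fetch()` directly, and `makeApi` is an assumed name.

```javascript
// Caching API wrapper: per-path TTL plus in-flight dedup.
function makeApi(fetcher) {
  const cache = new Map();     // path -> { value, expires }
  const inflight = new Map();  // path -> pending promise

  return async function api(path, ttlMs = 0) {
    const hit = cache.get(path);
    if (hit && Date.now() < hit.expires) return hit.value;  // TTL hit
    if (inflight.has(path)) return inflight.get(path);      // dedup

    const p = Promise.resolve(fetcher(path)).then(value => {
      if (ttlMs > 0) cache.set(path, { value, expires: Date.now() + ttlMs });
      inflight.delete(path);
      return value;
    });
    inflight.set(path, p);
    return p;
  };
}
```

A `ttlMs` of 0 skips caching entirely, matching the real-time endpoints (/packets, /resolve-hops, /perf) that the commit exempts.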
you 8587286896 merge: performance instrumentation 2026-03-20 01:49:19 +00:00
you 4fff11976e feat: performance instrumentation — server timing middleware, client API tracking, /api/perf endpoint, #/perf dashboard 2026-03-20 01:34:25 +00:00