Commit Graph

84 Commits

Author SHA1 Message Date
you
c7e528331c remove self-referential hashtag channel key seeding from DB 2026-03-21 01:28:17 +00:00
Kpa-clawbot
e9b2dc7c00 Merge pull request #109 from lincomatic/prgraceful
graceful shutdown
2026-03-21 01:25:17 +00:00
Kpa-clawbot
5a36b8bf2e Merge pull request #105 from lincomatic/https
add https support
2026-03-21 01:25:15 +00:00
you
cf9d5e3325 fix: map labels show short hash ID (e.g. 5B, BEEF), better deconfliction with spiral offsets 2026-03-21 00:30:25 +00:00
you
1adc3ca41d Add hash size labels for repeater markers on map
- Compute hash_size from ADVERT packets in /api/nodes response
- Show colored rectangle markers with hash size (e.g. '2B') for repeaters
- Add 'Hash size labels' toggle in map controls (default ON, saved to localStorage)
- Non-repeater markers unchanged
2026-03-21 00:19:15 +00:00
you
ab22b98f48 fix: switch all user-facing URLs to hash-based for stability across restarts
After dedup migration, packet IDs from the legacy 'packets' table differ
from transmission IDs in the 'transmissions' table. URLs using numeric IDs
became invalid after restart when _loadNormalized() assigned different IDs.

Changes:
- All packet URLs now use 16-char hex hashes instead of numeric IDs
  (#/packets/HASH instead of #/packet/ID)
- selectPacket() accepts hash parameter, uses hash-based URLs
- Copy Link generates hash-based URLs
- Search results link to hash-based URLs
- /api/packets/:id endpoint accepts both numeric IDs and 16-char hashes
- insert() now calls insertTransmission() to get stable transmission IDs
- Added db.getTransmission() for direct transmission table lookup
- Removed redundant byTransmission map (identical to byHash)
- All byTransmission references replaced with byHash
2026-03-21 00:18:11 +00:00
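The dual ID/hash lookup this commit describes could be sketched roughly like this — a hypothetical helper, not the project's actual code; the map names and precedence are assumptions:

```javascript
// Illustrative sketch: /api/packets/:id accepting either a 16-char hex
// hash (stable across restarts) or a legacy numeric ID.
function resolvePacket(param, byId, byHash) {
  if (/^[0-9a-f]{16}$/i.test(param)) {
    return byHash.get(param.toLowerCase()); // hash-based lookup
  }
  const id = Number(param);
  return Number.isInteger(id) ? byId.get(id) : undefined; // numeric ID
}
```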
you
9d65f041d4 fix: check transmission ID before observation ID in packet lookup 2026-03-20 23:58:23 +00:00
you
8ebce8087b fix: packet detail lookup uses transmission ID index
After dedup, table rows have transmission IDs but getById() maps
observation IDs. Added byTxId index so /api/packets/:id resolves
both observation and transmission IDs correctly.
2026-03-20 23:55:38 +00:00
you
15e80e56f1 fix: packet links use hash instead of observation ID
After dedup migration, transmission IDs != old packet IDs.
Hash-based links (#/packets/HASH) are stable across the migration.
Affected: node detail, channel messages, live page packet cards.
2026-03-20 23:40:30 +00:00
you
9713e972f9 traces: clickable observer names, path graph, remove redundant observer table
- API returns observer_name + path_json per observation
- Timeline shows resolved names with links to #/observers/<id>
- Path Visualization replaced with SVG graph showing all observed paths
- Removed redundant Observer Details table (same data as timeline)
2026-03-20 23:19:37 +00:00
you
aa35164252 M4: API response changes for dedup-normalize
- GET /api/packets: returns transmissions with observation_count, strip
  observations[] by default (use ?expand=observations to include)
- GET /api/packets/:id includes observation_count and observations[]
- GET /api/nodes/:pubkey/health: stats.totalTransmissions + totalObservations
  (totalPackets kept for backward compat)
- GET /api/nodes/bulk-health: same transmission/observation split
- WebSocket broadcast: includes observation_count
- db.js getStats(): adds totalTransmissions count
- All backward-compatible: old field names preserved alongside new ones
2026-03-20 20:49:34 +00:00
you
baa60cac0f M2: Dual-write ingest to transmissions/observations tables
- Add transmissions and observations schema to db.js init
- Add insertTransmission() function: upsert transmission by hash,
  always insert observation row
- All 6 pktStore.insert() call sites in server.js now also call
  db.insertTransmission() with try/catch (non-fatal on error)
- packet-store.js: add byTransmission Map index (hash → transmission
  with observations array) for future M3 query migration
- Existing insertPacket() and all read paths unchanged
2026-03-20 20:29:03 +00:00
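The upsert-by-hash shape described above can be sketched with an in-memory stand-in — the real code writes to SQLite tables, and these field names are assumptions:

```javascript
// Minimal sketch of "upsert transmission by hash, always insert
// observation row" using Maps instead of SQLite.
function makeTransmissionStore() {
  const byHash = new Map(); // hash -> transmission with its observations
  return {
    insertTransmission(pkt) {
      let tx = byHash.get(pkt.hash);
      if (!tx) { // upsert: create the transmission on first sight of this hash
        tx = { hash: pkt.hash, firstSeen: pkt.ts, observations: [] };
        byHash.set(pkt.hash, tx);
      }
      // always record the observation, even for duplicate transmissions
      tx.observations.push({ observer: pkt.observer, ts: pkt.ts, snr: pkt.snr });
      return tx;
    },
    get: (hash) => byHash.get(hash),
  };
}
```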
lincomatic
ac8a6a4dc3 graceful shutdown 2026-03-20 10:58:37 -07:00
you
4f7b02a91c fix: centralize hardcoded values — roles, thresholds, colors, tiles, limits — closes #104
- New public/roles.js shared module: ROLE_COLORS, ROLE_LABELS, ROLE_STYLE,
  ROLE_EMOJI, ROLE_SORT, HEALTH_THRESHOLDS, TILE_DARK/LIGHT, SNR_THRESHOLDS,
  DIST_THRESHOLDS, MAX_HOP_DIST, LIMITS — all configurable via /api/config/roles
- Removed duplicate ROLE_COLORS from map.js, nodes.js, live.js, analytics.js
- Removed duplicate health thresholds from nodes.js, home.js, observer-detail.js
- Deduplicated CartoDB tile URLs (3 copies → 1 in roles.js)
- Removed hardcoded region names from map.js and packets.js
- channels.js uses ROLE_EMOJI/ROLE_LABELS instead of hardcoded emoji chains
- server.js reads healthThresholds from config.json with defaults
- Unknown roles get gray circle fallback instead of crashing
2026-03-20 17:36:41 +00:00
lincomatic
209e17fcd4 add https support 2026-03-20 09:21:02 -07:00
you
f0db317051 fix: observer locations case-insensitive match, regions from API not hardcoded
- Observer ID is uppercase, node pubkey is lowercase — added COLLATE NOCASE
- New /api/config/regions endpoint merges config regions + observed IATAs
- map.js and packets.js fetch regions from API instead of hardcoded maps
2026-03-20 16:10:16 +00:00
you
74a08d99b0 fix: observer location from nodes table direct join — observer ID = node pubkey 2026-03-20 15:11:57 +00:00
you
76d63ffe75 feat: observers as map markers (purple stars) with computed locations
- Removed dead 'MQTT Connected Only' checkbox (never worked)
- Added 'observer' role type with purple star marker
- Observer locations computed from average of nodes they've seen
- Observer popup with name, IATA, packets, link to detail page
- Role filter checkbox includes observers with count
2026-03-20 14:56:08 +00:00
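"Computed from average of nodes they've seen" presumably reduces to a mean lat/lon over located nodes — a sketch under that assumption (field names included):

```javascript
// Place an observer marker at the mean position of the nodes it has
// heard, skipping nodes without coordinates. Illustrative, not the
// project's actual implementation.
function observerLocation(nodesSeen) {
  const located = nodesSeen.filter((n) => n.lat != null && n.lon != null);
  if (located.length === 0) return null; // nothing to place the marker on
  return {
    lat: located.reduce((s, n) => s + n.lat, 0) / located.length,
    lon: located.reduce((s, n) => s + n.lon, 0) / located.length,
  };
}
```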
you
9ef5c1a809 fix: My Nodes filter uses server-side findPacketsForNode for all packet types
Client was matching field names that only exist on ADVERTs. Now sends
pubkeys to server, which uses findPacketsForNode() (byNode index +
text search) to find ALL packet types referencing those nodes.
2026-03-20 09:30:41 +00:00
you
920eab04c1 fix: sort observer analytics packets newest-first, fix recentPackets slice 2026-03-20 09:10:16 +00:00
you
d67b531bf2 fix: pass observer name (msg.origin) to packet insert and observer upsert
Lincomatic MQTT packets include origin field with friendly name but
we were only using it in status handler. Now both packet and observer
records get the name.
2026-03-20 09:07:39 +00:00
you
5a6847bbf4 fix: remove duplicate crypto require that crashed server 2026-03-20 08:54:35 +00:00
you
034c68c154 feat: auto-derive hashtag channel keys from SHA256(name)
Scans DB for known #channel names, derives 16-byte AES keys
algorithmically. No need to manually add hashtag channels to config.
Private channels still need manual config.
2026-03-20 08:53:11 +00:00
you
477dcde82f fix: analytics shows total packets + signal data count separately
RF analytics filtered on snr!=null, showing only 3 packets when most
lincomatic data has no SNR. Now shows total packets prominently and
signal-data count as a separate stat.
2026-03-20 08:42:38 +00:00
you
c997318cd2 fix: recentPackets in health endpoint — show 20 newest DESC, not 10 oldest 2026-03-20 08:39:01 +00:00
you
8ce2262813 refactor: single findPacketsForNode() replaces 4 duplicate node lookups
One method resolves name→pubkey, combines byNode index + text search.
Used in: query fast-path, combined-filter path, health endpoint,
analytics endpoint. Bulk-health still uses index-only (perf).
2026-03-20 08:29:45 +00:00
you
90c4c03ac3 fix: node health endpoint searches by name + pubkey, not just byNode index
byNode only has packets where full pubkey appears in decoded_json fields.
Channel messages reference nodes by name, not pubkey. Combined search
finds all 304 packets instead of just 12.
2026-03-20 08:25:06 +00:00
you
d8c0e3a156 fix: only backfill observer name if observer already exists
Prevents ADVERT nodes from being created as observers
2026-03-20 07:47:37 +00:00
you
f58728118d feat: observer detail page with analytics
- GET /api/observers/:id — observer metadata + packet count
- GET /api/observers/:id/analytics — timeline, type breakdown, nodes heard, SNR distribution
- observer-detail.js — info cards, 4 Chart.js charts, recent packets table
- Observers list rows now clickable to navigate to detail
- Time range selector (24h, 3d, 7d, 30d)
2026-03-20 07:37:36 +00:00
you
4aa78305d3 feat: backfill observer names from ADVERT pubkey cross-reference 2026-03-20 07:32:46 +00:00
you
2713d501b4 feat: MQTT topic arrays, IATA filtering, observer status parsing
- mqttSources[].topics is now an array of topic patterns
- mqttSources[].iataFilter optionally restricts to specific regions
- meshcore/<region>/<id>/status topic parsed for observer metadata:
  name, model, firmware, client_version, radio, battery, uptime, noise_floor
- New observer columns with auto-migration for existing DBs
- Status updates don't inflate packet_count (separate updateObserverStatus)
2026-03-20 07:29:01 +00:00
you
4ff72935ca feat: multi-broker MQTT support
config.mqttSources array allows multiple MQTT brokers with independent
topics, credentials, and TLS settings. Legacy config.mqtt still works
for backward compatibility.
2026-03-20 07:18:49 +00:00
you
2e51e5f743 feat: system health + SWR stats in perf dashboard
Perf page now shows: heap usage, RSS, event loop p95/max/current,
WS client count, stale-while-revalidate hits, recompute count.
Color-coded: green/yellow/red based on thresholds.
2026-03-20 05:45:51 +00:00
you
11b398cfe1 feat: stale-while-revalidate cache + /api/health telemetry
Cache: entries stay valid for 2× TTL as stale. First request after
TTL serves stale data while recompute runs (guarded: one at a time).
No more cache stampedes.

/api/health returns:
- Process memory (RSS, heap)
- Event loop lag (p50/p95/p99/max, sampled every 1s)
- Cache stats (hit rate, stale hits, recomputes)
- WebSocket client count
- Packet store size
- Recent slow queries
2026-03-20 05:43:32 +00:00
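The stale-while-revalidate behavior described above — fresh within TTL, servable as stale up to 2× TTL while one guarded recompute runs — can be sketched as follows (names and shapes are illustrative, not the project's actual cache):

```javascript
// Minimal SWR cache sketch: fresh hits return immediately; stale hits
// return the old value and kick off at most one background recompute.
function createSwrCache(ttlMs) {
  const entries = new Map(); // key -> { value, at, recomputing }
  return async function get(key, compute) {
    const now = Date.now();
    const e = entries.get(key);
    if (e && now - e.at < ttlMs) return e.value; // fresh hit
    if (e && now - e.at < 2 * ttlMs) {
      if (!e.recomputing) { // guard: one recompute at a time per key
        e.recomputing = true;
        Promise.resolve()
          .then(compute)
          .then((value) => entries.set(key, { value, at: Date.now(), recomputing: false }))
          .catch(() => { e.recomputing = false; }); // failed recompute: allow retry
      }
      return e.value; // stale hit, no stampede
    }
    const value = await compute(); // cold path: first request pays the cost
    entries.set(key, { value, at: Date.now(), recomputing: false });
    return value;
  };
}
```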
you
77b7b218b1 perf: channels endpoint — single pass, no sort, no double filter
Was doing two pktStore.filter() calls + sort on each. Now single
loop over all packets with inline type check.
2026-03-20 05:31:03 +00:00
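The single-pass rewrite might look roughly like this — packet field names are assumptions, and tracking the newest timestamp inline is what removes the sort:

```javascript
// One loop over all packets with an inline type check, replacing two
// filter() passes plus a sort. Illustrative sketch only.
function channelStats(packets) {
  const byChannel = new Map();
  for (const p of packets) {
    if (p.type !== 'CHANNEL_MSG') continue; // inline type check
    let s = byChannel.get(p.channel);
    if (!s) byChannel.set(p.channel, (s = { count: 0, latest: 0 }));
    s.count++;
    if (p.ts > s.latest) s.latest = p.ts; // newest message without sorting
  }
  return byChannel;
}
```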
you
0a499745ec perf: pre-warm all heavy analytics endpoints on startup
Sequential self-requests after subpath pre-warm completes.
RF, topology, channels, hash-sizes, bulk-health all cached
before any user hits the page.
2026-03-20 05:29:10 +00:00
you
c83eb099c9 perf: stop calling db.getNode() in health/analytics endpoints
db.getNode() does a 4-way LIKE scan for recent packets we don't even
use. Direct SELECT on primary key instead. Saves ~110ms per call.
2026-03-20 05:17:06 +00:00
you
cd01da5a64 perf: hash-sizes analytics reads from memory store
Last remaining full-table scan on packets from SQLite.
All packet reads now go through pktStore (in-memory).
2026-03-20 05:13:13 +00:00
you
0b4590e48d perf: node detail uses in-memory packet index
Was doing 4-way LIKE scan for recent packets (~130ms).
Now reads from pktStore.byNode, slices last 20.
2026-03-20 05:12:00 +00:00
you
f5d377e396 perf: node health uses in-memory packet store
Was doing 6 LIKE scans on SQLite (~169ms). Now reads from
pktStore.byNode index, single pass over packets.
2026-03-20 05:11:00 +00:00
you
dc703ebf28 perf: node analytics uses in-memory packet store
Was doing 7 separate LIKE scans on SQLite (~552ms). Now reads from
pktStore.byNode index and computes all aggregations in JS.
Also added cache with TTL.nodeAnalytics.
2026-03-20 05:09:49 +00:00
you
89c1e84924 perf: bulk-health uses in-memory packet store index
Was doing 50-pattern LIKE OR scan on all packets in SQLite (~2s).
Now reads from pktStore.byNode index — O(1) lookup per node.
2026-03-20 05:08:23 +00:00
you
50b6124325 revert: remove background refresh jobs — blocks event loop
Node.js is single-threaded. A 5s subpath computation in a background
timer blocks ALL concurrent requests. Stats endpoint went from 3ms
to 1.2s because it was waiting for a background refresh to finish.

Pre-warm on startup + long TTLs (30min-1hr) is sufficient. At most
one user per hour eats a cold compute cost.
2026-03-20 04:56:57 +00:00
you
f08756a6ac perf: background refresh jobs — recompute expensive caches before TTL expires
No user ever hits a cold compute path. Background timers fire at 80%
of each TTL, hitting endpoints with ?nocache=1 to force recomputation
and re-cache the result.

Jobs: RF (24min), topology (24min), channels (24min), hash-sizes (48min),
subpaths ×4 (48min), bulk-health (8min).
2026-03-20 04:52:06 +00:00
you
44f9a95ec5 feat: benchmark suite + nocache bypass for cold compute testing
node benchmark.js [--runs N] [--json]
Adds ?nocache=1 query param to bypass server cache for benchmarking.
Tests all 21 endpoints cached vs cold, shows speedup comparison.
2026-03-20 04:23:34 +00:00
you
2edcca77f1 perf: RF endpoint from 1MB to ~15KB — server-side histograms, scatter downsampled to 500pts
Was sending 27K raw SNR/RSSI/size values (420KB) + 27K scatter points (763KB).
Now: histograms computed server-side (20-25 bins), scatter downsampled
to max 500 evenly-spaced points. Client histogram() accepts both formats.
2026-03-20 04:17:25 +00:00
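The evenly-spaced scatter downsampling can be sketched as a stride pick that always keeps the first and last points (the histogram half of the commit is omitted; this is illustrative, not the endpoint's actual code):

```javascript
// Downsample to at most maxPoints evenly spaced samples, preserving the
// endpoints. Returns the input unchanged when it is already small enough.
function downsample(points, maxPoints = 500) {
  if (points.length <= maxPoints) return points;
  const step = (points.length - 1) / (maxPoints - 1);
  const out = [];
  for (let i = 0; i < maxPoints; i++) out.push(points[Math.round(i * step)]);
  return out;
}
```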
you
4c6172bc6e perf: WS packets prepend client-side instead of re-fetching entire list
Non-grouped mode: new packets from WebSocket are filtered client-side
and prepended to the table, no API call. Grouped mode still re-fetches
(group counts change). Server broadcast now includes full packet row.
Eliminates repeated /api/packets fetches on every incoming packet.
2026-03-20 04:12:07 +00:00
you
d01fa7e17f perf: pre-warm all 4 subpath query variants on startup + dedup concurrent computation 2026-03-20 03:53:10 +00:00
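The "dedup concurrent computation" part typically means sharing one in-flight promise among all concurrent callers — a sketch under that assumption:

```javascript
// While one compute is in flight, every caller awaits the same promise
// instead of starting another full scan. Illustrative names.
function dedupeInflight(compute) {
  let inflight = null;
  return function run() {
    if (!inflight) {
      inflight = Promise.resolve()
        .then(compute)
        .finally(() => { inflight = null; }); // settle, then allow a fresh compute
    }
    return inflight;
  };
}
```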
you
35e86c34e0 perf: single-pass subpath computation + startup pre-warm
4 parallel subpath queries were each scanning 27K packets independently
(937ms + 1.99s + 3.09s + 6.19s). Now one shared computation builds all
subpath data, cached for 1hr. Subsequent queries just slice the result.
Pre-warmed on startup so first user never sees a cold call.
2026-03-20 03:51:58 +00:00
you
f8638974c7 perf: smart cache invalidation — only channels/observers on packet burst, node/health/analytics expire by TTL, node invalidated on ADVERT only 2026-03-20 03:48:55 +00:00