- Compute hash_size from ADVERT packets in /api/nodes response
- Show colored rectangle markers with hash size (e.g. '2B') for repeaters
- Add 'Hash size labels' toggle in map controls (default ON, saved to localStorage)
- Non-repeater markers unchanged
After dedup migration, packet IDs from the legacy 'packets' table differ
from transmission IDs in the 'transmissions' table. URLs using numeric IDs
became invalid after restart when _loadNormalized() assigned different IDs.
Changes:
- All packet URLs now use 16-char hex hashes instead of numeric IDs
(#/packets/HASH instead of #/packet/ID)
- selectPacket() accepts hash parameter, uses hash-based URLs
- Copy Link generates hash-based URLs
- Search results link to hash-based URLs
- /api/packets/:id endpoint accepts both numeric IDs and 16-char hashes
- insert() now calls insertTransmission() to get stable transmission IDs
- Added db.getTransmission() for direct transmission table lookup
- Removed redundant byTransmission map (identical to byHash)
- All byTransmission references replaced with byHash
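The dual-accept behavior of `/api/packets/:id` can be sketched as below. The index names `byHash` and `byId` mirror the indexes described above; the exact shapes are illustrative, not the real implementation.

```javascript
// Resolve a packet by either a 16-char hex hash or a numeric observation ID.
const HEX16 = /^[0-9a-f]{16}$/i;

function resolvePacket(store, idOrHash) {
  if (HEX16.test(idOrHash)) {
    // Hash path: stable across the dedup migration.
    return store.byHash.get(idOrHash.toLowerCase()) || null;
  }
  // Numeric path: kept for backward compatibility with old links.
  const id = Number(idOrHash);
  return Number.isInteger(id) ? store.byId.get(id) || null : null;
}
```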
After dedup, table rows have transmission IDs but getById() maps
observation IDs. Added byTxId index so /api/packets/:id resolves
both observation and transmission IDs correctly.
After dedup migration, transmission IDs != old packet IDs.
Hash-based links (#/packets/HASH) are stable across the migration.
Affected: node detail, channel messages, live page packet cards.
- API returns observer_name + path_json per observation
- Timeline shows resolved names with links to #/observers/<id>
- Path Visualization replaced with SVG graph showing all observed paths
- Removed redundant Observer Details table (same data as timeline)
- GET /api/packets: returns transmissions with observation_count;
  observations[] stripped by default (use ?expand=observations to include)
- GET /api/packets/:id includes observation_count and observations[]
- GET /api/nodes/:pubkey/health: stats.totalTransmissions + totalObservations
(totalPackets kept for backward compat)
- GET /api/nodes/bulk-health: same transmission/observation split
- WebSocket broadcast: includes observation_count
- db.js getStats(): adds totalTransmissions count
- All backward-compatible: old field names preserved alongside new ones
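The default stripping of `observations[]` can be sketched as a small response shaper, assuming the `?expand=observations` query param named above; field names beyond those in the commit are illustrative.

```javascript
// Shape a transmission for the API response: always report observation_count,
// but only include the observations[] array when explicitly expanded.
function shapeTransmission(tx, expand) {
  const { observations = [], ...rest } = tx;
  const out = { ...rest, observation_count: observations.length };
  if (expand === 'observations') out.observations = observations;
  return out;
}
```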
- Add transmissions and observations schema to db.js init
- Add insertTransmission() function: upsert transmission by hash,
always insert observation row
- All 6 pktStore.insert() call sites in server.js now also call
db.insertTransmission() with try/catch (non-fatal on error)
- packet-store.js: add byTransmission Map index (hash → transmission
with observations array) for future M3 query migration
- Existing insertPacket() and all read paths unchanged
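The upsert-plus-append semantics of insertTransmission() can be illustrated with an in-memory sketch; the real code targets SQLite, and the row shapes here are assumptions, not the actual schema.

```javascript
// Upsert the transmission by hash; always append an observation row.
// A duplicate packet therefore adds an observation but never a transmission.
function insertTransmission(db, pkt, observer) {
  let tx = db.transmissions.get(pkt.hash);
  if (!tx) {
    tx = { hash: pkt.hash, first_seen: pkt.ts, observations: [] };
    db.transmissions.set(pkt.hash, tx);
  }
  tx.observations.push({ observer, ts: pkt.ts, snr: pkt.snr ?? null });
  return tx;
}
```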
- Observer IDs are stored uppercase while node pubkeys are lowercase; added COLLATE NOCASE so the comparison matches
- New /api/config/regions endpoint merges config regions + observed IATAs
- map.js and packets.js fetch regions from API instead of hardcoded maps
- Removed dead 'MQTT Connected Only' checkbox (never worked)
- Added 'observer' role type with purple star marker
- Observer locations computed from average of nodes they've seen
- Observer popup with name, IATA, packets, link to detail page
- Role filter checkbox includes observers with count
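The location heuristic from the list above (observer placed at the average of the nodes it has heard) can be sketched as follows; the node shape is illustrative.

```javascript
// Place an observer at the mean lat/lon of the nodes it has heard.
// Nodes without usable coordinates are skipped; returns null if none qualify.
function observerLocation(nodes) {
  const located = nodes.filter(n => Number.isFinite(n.lat) && Number.isFinite(n.lon));
  if (located.length === 0) return null;
  const sum = located.reduce(
    (a, n) => ({ lat: a.lat + n.lat, lon: a.lon + n.lon }),
    { lat: 0, lon: 0 }
  );
  return { lat: sum.lat / located.length, lon: sum.lon / located.length };
}
```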
The client was matching field names that only exist on ADVERTs. It now
sends pubkeys to the server, which uses findPacketsForNode() (byNode index +
text search) to find ALL packet types referencing those nodes.
Lincomatic MQTT packets include an origin field with a friendly name, but
we were only using it in the status handler. Now both packet and observer
records get the name.
Scans DB for known #channel names, derives 16-byte AES keys
algorithmically. No need to manually add hashtag channels to config.
Private channels still need manual config.
RF analytics filtered on snr != null, so only 3 packets showed because
most lincomatic data carries no SNR. Now the total packet count is shown
prominently, with the signal-data count as a separate stat.
One method resolves name→pubkey, combines byNode index + text search.
Used in: query fast-path, combined-filter path, health endpoint,
analytics endpoint. Bulk-health still uses index-only (perf).
byNode only has packets where full pubkey appears in decoded_json fields.
Channel messages reference nodes by name, not pubkey. Combined search
finds all 304 packets instead of just 12.
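The combined lookup can be sketched as below: exact hits from the byNode index first, then packets whose text mentions the node's name, deduplicated. The store shape is illustrative.

```javascript
// Combine the byNode index (pubkey appears in decoded fields) with a
// case-insensitive text search for the node's friendly name.
function findPacketsForNode(store, pubkey, name) {
  const seen = new Set();
  const out = [];
  for (const p of store.byNode.get(pubkey) || []) {
    seen.add(p.id);
    out.push(p);
  }
  if (name) {
    const needle = name.toLowerCase();
    for (const p of store.packets) {
      if (!seen.has(p.id) && (p.text || '').toLowerCase().includes(needle)) {
        out.push(p); // e.g. channel messages that name the node
      }
    }
  }
  return out;
}
```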
- GET /api/observers/:id — observer metadata + packet count
- GET /api/observers/:id/analytics — timeline, type breakdown, nodes heard, SNR distribution
- observer-detail.js — info cards, 4 Chart.js charts, recent packets table
- Observers list rows now clickable to navigate to detail
- Time range selector (24h, 3d, 7d, 30d)
- mqttSources[].topics is now an array of topic patterns
- mqttSources[].iataFilter optionally restricts to specific regions
- meshcore/<region>/<id>/status topic parsed for observer metadata:
name, model, firmware, client_version, radio, battery, uptime, noise_floor
- New observer columns with auto-migration for existing DBs
- Status updates don't inflate packet_count (separate updateObserverStatus)
config.mqttSources array allows multiple MQTT brokers with independent
topics, credentials, and TLS settings. Legacy config.mqtt still works
for backward compatibility.
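A config of this shape might look like the sketch below. Field names beyond `mqttSources`, `topics`, and `iataFilter` (e.g. `url`, `username`, `password`) are assumptions for illustration; URLs and credentials are placeholders.

```javascript
// Illustrative config fragment: two independent brokers with their own
// topics, credentials, and TLS settings.
module.exports = {
  mqttSources: [
    {
      url: 'mqtts://broker-a.example.com:8883', // TLS broker
      username: 'user',
      password: 'secret',
      topics: ['meshcore/+/+/packets', 'meshcore/+/+/status'],
      iataFilter: ['SEA', 'PDX'], // optional: restrict to these regions
    },
    {
      url: 'mqtt://broker-b.example.com:1883',  // plain broker
      topics: ['meshcore/#'],                   // no iataFilter: all regions
    },
  ],
  // Legacy single-broker config.mqtt still works alongside this.
};
```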
Cache: entries stay valid for 2× TTL as stale. First request after
TTL serves stale data while recompute runs (guarded: one at a time).
No more cache stampedes.
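The stale-while-revalidate behavior can be sketched as a minimal cache: fresh within the TTL, served stale up to 2× TTL while one guarded background recompute runs. Names and shapes here are illustrative, not the real module.

```javascript
// Minimal stale-while-revalidate cache with a single-flight recompute guard.
function createCache() {
  const entries = new Map(); // key -> { value, at, recomputing }
  return {
    async get(key, ttlMs, compute) {
      const now = Date.now();
      const e = entries.get(key);
      if (e && now - e.at < ttlMs) return e.value;       // fresh hit
      if (e && now - e.at < 2 * ttlMs) {                 // stale hit
        if (!e.recomputing) {                            // one recompute at a time
          e.recomputing = true;
          compute()
            .then(v => entries.set(key, { value: v, at: Date.now() }))
            .catch(() => { e.recomputing = false; });
        }
        return e.value; // serve stale while the refresh runs
      }
      const value = await compute();                     // cold miss
      entries.set(key, { value, at: now });
      return value;
    },
  };
}
```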
/api/health returns:
- Process memory (RSS, heap)
- Event loop lag (p50/p95/p99/max, sampled every 1s)
- Cache stats (hit rate, stale hits, recomputes)
- WebSocket client count
- Packet store size
- Recent slow queries
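The event-loop-lag sampling can be sketched as below: a 1s timer measures how late it actually fires, and a ring of recent samples feeds the percentiles. This is an illustrative shape, not the actual module.

```javascript
// Sample event-loop lag every intervalMs: the amount by which the timer
// fires later than scheduled is the lag. Keeps the last `keep` samples.
function createLagSampler(intervalMs = 1000, keep = 300) {
  const samples = [];
  let last = Date.now();
  const timer = setInterval(() => {
    const now = Date.now();
    samples.push(Math.max(0, now - last - intervalMs)); // lag beyond schedule
    if (samples.length > keep) samples.shift();
    last = now;
  }, intervalMs);
  timer.unref?.(); // don't keep the process alive just for sampling
  const pct = p => {
    if (samples.length === 0) return 0;
    const sorted = [...samples].sort((a, b) => a - b);
    return sorted[Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length))];
  };
  return {
    samples, // exposed for inspection
    stats: () => ({ p50: pct(50), p95: pct(95), p99: pct(99), max: Math.max(0, ...samples) }),
  };
}
```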
Was doing 7 separate LIKE scans on SQLite (~552ms). Now reads from
pktStore.byNode index and computes all aggregations in JS.
Also added cache with TTL.nodeAnalytics.
Node.js is single-threaded. A 5s subpath computation in a background
timer blocks ALL concurrent requests. Stats endpoint went from 3ms
to 1.2s because it was waiting for a background refresh to finish.
Pre-warm on startup + long TTLs (30min-1hr) is sufficient. At most
one user per hour eats a cold compute cost.
No user ever hits a cold compute path. Background timers fire at 80%
of each TTL, hitting endpoints with ?nocache=1 to force recomputation
and re-cache the result.
Jobs: RF (24min), topology (24min), channels (24min), hash-sizes (48min),
subpaths ×4 (48min), bulk-health (8min).
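The warmer scheduling above can be sketched as follows; `hit` is injected so the sketch stays transport-agnostic (the real jobs request their endpoints with ?nocache=1).

```javascript
// Run each job once at startup, then re-fire at 80% of its cache TTL so
// the cached entry is refreshed before it can ever expire.
function scheduleWarmers(jobs, hit) {
  const timers = [];
  for (const { path, ttlMs } of jobs) {
    hit(path); // pre-warm on startup
    timers.push(setInterval(() => hit(path), Math.floor(ttlMs * 0.8)));
  }
  timers.forEach(t => t.unref?.()); // timers shouldn't keep the process alive
  return timers;
}
```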
node benchmark.js [--runs N] [--json]
Adds ?nocache=1 query param to bypass server cache for benchmarking.
Tests all 21 endpoints cached vs cold, shows speedup comparison.
Non-grouped mode: new packets from WebSocket are filtered client-side
and prepended to the table, no API call. Grouped mode still re-fetches
(group counts change). Server broadcast now includes full packet row.
Eliminates repeated /api/packets fetches on every incoming packet.
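The non-grouped live path can be sketched as a pure client-side function: check the incoming packet against the active filters and prepend on match. The filter fields here are illustrative.

```javascript
// Apply an incoming WebSocket packet to the current table rows without
// refetching: filter client-side, prepend on match, cap the row count.
function applyLivePacket(rows, pkt, filters, maxRows = 200) {
  if (filters.type && pkt.type !== filters.type) return rows;     // filtered out
  if (filters.region && pkt.region !== filters.region) return rows;
  const next = [pkt, ...rows];                                    // prepend newest
  return next.length > maxRows ? next.slice(0, maxRows) : next;
}
```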
4 parallel subpath queries were each scanning 27K packets independently
(937ms + 1.99s + 3.09s + 6.19s). Now one shared computation builds all
subpath data, cached for 1hr. Subsequent queries just slice the result.
Pre-warmed on startup so first user never sees a cold call.
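The share-one-scan pattern can be sketched as a small cache wrapper: one computation builds all subpath data, and each of the four queries takes its slice. Names are illustrative.

```javascript
// One expensive scan builds all subpath variants; individual queries just
// pick their slice of the cached result until the TTL expires.
function createSubpathCache(computeAll, ttlMs = 60 * 60 * 1000) {
  let cached = null;
  let at = 0;
  return function getSubpaths(kind) {
    const now = Date.now();
    if (!cached || now - at > ttlMs) {
      cached = computeAll(); // single shared computation
      at = now;
    }
    return cached[kind];     // per-query slice of the shared result
  };
}
```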